!pip install torchviz # Run this cell to enable model visualization with torchviz's make_dot
import torch
import torch.nn as nn
from torchsummary import summary
from torchviz import make_dot
import torch.nn.functional as F
import torch.optim as optim
import torch.utils.data as data
from PIL import Image
import cv2
import torchvision.transforms as transforms
import torchvision.datasets as datasets
from torch.utils.data import Dataset
from torchvision.transforms.functional import adjust_gamma
from sklearn import metrics
from sklearn import decomposition
from sklearn import manifold
from tqdm.notebook import trange, tqdm
import matplotlib
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
import copy
import random
import time
from collections import Counter
import glob
import matplotlib.image as mpimg
import matplotlib.pyplot as plt
import numpy as np
import os
import pandas as pd
from PIL import Image
from sklearn.preprocessing import StandardScaler
from sklearn.exceptions import ConvergenceWarning
from sklearn.metrics import accuracy_score, mean_squared_error, roc_auc_score
from sklearn.metrics import classification_report
from sklearn.metrics import confusion_matrix, ConfusionMatrixDisplay
from sklearn.model_selection import train_test_split
import tarfile
from tqdm.notebook import tqdm
import warnings
warnings.filterwarnings("ignore")
# This section works only on a default Google Colab runtime, not on a custom GCE VM or a local Jupyter Notebook
from google.colab import auth
from google.auth import default
from google.colab import drive
import gspread
'''
Authenticating against Google Sheets from the current Colab session
Google Snippets Colab Notebook:
https://colab.research.google.com/notebooks/snippets/sheets.ipynb
'''
auth.authenticate_user()
creds, _ = default()
gc = gspread.authorize(creds)
'''
Mounting Google Drive to the current Colab session
How to Connect Google Colab with Google Drive:
https://www.marktechpost.com/2019/06/07/how-to-connect-google-colab-with-google-drive/
'''
drive.mount('/content/drive')
Mounted at /content/drive
We meet about four times per week. Because of our different schedules, we are not set on specific days; instead, we pick the next meeting date and time based on how many tasks need to be reviewed and on our availability.
Data below is based on TeamTable Google Sheet.
Our meetings so far:
'''
Loading content from a Google Sheets file and printing out the output
Google Snippets Colab Notebook:
https://colab.research.google.com/notebooks/snippets/sheets.ipynb
'''
# sheet 1 -> Leaders
# sheet 2 -> Meeting_Schedule
# sheet 3 -> Gantt_Chart
worksheet = gc.open('TeamTable').worksheet('Meeting_Schedule') # Load only the "Meeting_Schedule" sheet from the TeamTable file
rows = worksheet.get_all_values() # Get the list of all rows from a specific sheet
pd.set_option('display.max_colwidth', 0)
meetings = pd.DataFrame.from_records(rows[1:], columns=rows[0]) # Create a Pandas dataframe based on all the rows
meetings
| | DATE | TIME | TOPICS |
|---|---|---|---|
| 0 | 2023/03/22 | 4:00 PM | Introductions, logistics 1, project preferences, etc. |
| 1 | 2023/03/26 | 6:00 PM | Picking the type of project, logistics 2, submitting Phase 0, starting Phase 1, etc. |
| 2 | 2023/03/29 | 4:30 PM | Updates, organizing tasks, sharing resources, etc. |
| 3 | 2023/03/31 | 9:00 PM | Updates, reviewing&helping with current tasks |
| 4 | 2023/04/02 | 10:00 AM | Updates, wrapping up current tasks |
| 5 | 2023/04/03 | 8:00 PM | Updates, finalizing Phase 1 |
| 6 | 2023/04/04 | 6:00 PM | Submitting Phase 1, dividing our work on classification and regression models |
| 7 | 2023/04/08 | 5:00 PM | Phase 2 status update, classification and regression models progress, creating cloud environment, starting with the report, etc. |
| 8 | 2023/04/09 | 7:00 PM | Phase 2 status update |
| 9 | 2023/04/10 | 5:00 PM | Phase 2 status update, working on report, final hyperparameter tuning in the cloud, etc. |
| 10 | 2023/04/11 | 6:00 PM | Wrapping up and submitting Phase 2, dividing our work for Phase 3 |
| 11 | 2023/04/14 | 8:00 PM | Phase 3 status update, sharing issues with PyTorch, coordinating tasks, etc. |
| 12 | 2023/04/15 | 10:00 AM | Phase 3 status update, standardizing pre-processing and model creation approach, etc. |
| 13 | 2023/04/16 | 2:00 PM | Phase 3 status update |
| 14 | 2023/04/17 | 6:00 PM | Phase 3 status update, wrapping up and testing code, working on presentation, etc. |
| 15 | 2023/04/18 | 6:00 PM | Wrapping up and submitting Phase 3, dividing our work for Phase 3 |
| 16 | 2023/04/20 | 7:00 PM | Preparation for presentation during Friday's lab session, dividing our work for Phase 4 |
| 17 | 2023/04/23 | 10:00 AM | Phase 4 status update |
| 18 | 2023/04/24 | 5:00 PM | Phase 4 status update, wrapping up and testing code, working on presentation, etc. |
| 19 | 2023/04/25 | 6:00 PM | Wrapping up and submitting Phase 4, dividing our work for Phase 3 |
We have decided that the Phase leader will be responsible for these tasks (either doing them or delegating them):
Data below is based on TeamTable Google Sheet.
'''
Loading content from a Google Sheets file and printing out the output
Google Snippets Colab Notebook:
https://colab.research.google.com/notebooks/snippets/sheets.ipynb
'''
# sheet 1 -> Leaders
# sheet 2 -> Meeting_Schedule
# sheet 3 -> Gantt_Chart
worksheet = gc.open('TeamTable').worksheet('Leaders') # Load only the "Leaders" sheet from the TeamTable file
rows = worksheet.get_all_values() # Get the list of all rows from a specific sheet
pd.set_option('display.max_colwidth', 0)
team = pd.DataFrame.from_records(rows[1:], columns=rows[0]) # Create a Pandas dataframe based on all the rows
team
| | LEADER | PHASE |
|---|---|---|
| 0 | Vicente De Leon | 1 |
| 1 | Kelly Craig | 2 |
| 2 | Courtney Payton | 3 |
| 3 | Martin Berth | 4 |
Legend:
Data below is based on TeamTable Google Sheet.
'''
Loading content from a Google Sheets file
Google Snippets Colab Notebook:
https://colab.research.google.com/notebooks/snippets/sheets.ipynb
'''
# sheet 1 -> Leaders
# sheet 2 -> Meeting_Schedule
# sheet 3 -> Gantt_Chart
worksheet = gc.open('TeamTable').worksheet('Gantt_Chart') # Load only the "Gantt_Chart" sheet from the TeamTable file
rows = worksheet.get_all_values() # Get the list of all rows from a specific sheet
pd.set_option('display.max_colwidth', 0)
pd.set_option('display.max_rows', 500)
goals = pd.DataFrame.from_records(rows[6:], columns=rows[5]) # Create a Pandas dataframe based on all the rows
goals = goals[['PHASE', 'TASK', 'ASSIGNEE', 'CREDIT', 'DESCRIPTION']] # Keep only the relevant columns
# Ignore columns with "Phase n" and hide Panda's index column
display(goals.loc[~goals['TASK'].isin(['Phase 0', 'Phase 1', 'Phase 2' ,'Phase 3', 'Phase 4'])].style.hide(axis="index"))
| PHASE | TASK | ASSIGNEE | CREDIT | DESCRIPTION |
|---|---|---|---|---|
| 0 | Team logistics | Vicente | Courtney, Kelly, Martin, Vicente | We will decide on the team logistics and create them (creating a zoom room for meetings, creating Discord server for communication, creating a shared Google Drive location, selecting leaders for each week, etc.). We will individually research which project type we prefer to work on. Finally we will decide on the type of project. |
| 0 | Submitting Phase 0 | Martin | Courtney, Kelly, Martin, Vicente | We will meet and Martin will submit the Phase 0 assignment while sharing his screen. |
| 0 | Google Colab testing | Vicente | Martin, Vicente | We will test how to use Google Colab. We will test sharing data between an instance of Google Colab notebook and shared Google Drive directory. We will research paid versions of Google Colab which include GPU for future project phases. |
| 1 | Research importing tables into Colab | Vicente | Vicente | We will research and test inserting data for Leaders, Credit, and Goals as tables into a testing Google Colab notebook for proposal for Phase 1. |
| 1 | Test running notebooks on our systems | Martin | Courtney, Kelly, Martin, Vicente | Since training models in paid Colab can become expensive with time, we will ensure we all can run our code both in Google Colab and locally on our workstations (this will be especially interesting with 3 different OS types and x64 and Apple M CPUs) . We will verify if the provided Docker container is suitable for running CPU/GPU intensive notebooks or if we will need to use Jupyter notebooks directly on our workstations. |
| 1 | Create goals in Google Sheets | Martin | Martin | We will finish scoping goals in Google Doc and will move them into Google Sheets file. We will ensure they are presentable for Leaders, Credit, and Goal sections in the proposal for Phase 1. |
| 1 | Provide a baseline coding notebook (local and Colab) | Courtney | Courtney | We will have a starting code for testing both local Jupyter notebook and Google Colab. |
| 1 | Baseline classification pipeline (SKLearn) | Vicente | Vicente, Courtney | We will research a baseline classification pipeline in SKLearn. We will also describe it for the proposal in Phase 1. |
| 1 | Baseline regression pipeline (SKLearn) | Martin | Martin | We will research a baseline regression pipeline in SKLearn. We will also describe it for the proposal in Phase 1. |
| 1 | Create Colab notebook for Phase 1 submission | Courtney | Kelly, Courtney | We will create a Google Colab notebook for the Phase 1 submission. It should include current requirements from the Phase 1 Assignment page. |
| 1 | Create baseline pipeline | Vicente | Vicente | We will put together the previous pipeline tasks and will follow up with TAs if the "baseline pipeline" requirement is satisfied. |
| 1 | Create diagrams and graphs | Vicente | Vicente, Martin | We will create diagrams describing the pipelines for the proposal. We will create several graphs describing data. |
| 1 | Create Gantt Chart | Kelly | Kelly | Using matplotlib we will create a Gantt chart based on the TeamTable spreadsheet. If the automated graph generation will not work, we will use Excel or some other tool and take a screenshot of the result. |
| 1 | Write an abstract | Kelly | Kelly | We will write an abstract for the proposal. |
| 1 | Prepare data description and EDA | Kelly | Kelly, Martin | We will describe data, provide basic EDA, and include previously generated graphs. |
| 1 | Pipeline and ML description | Courtney | Courtney | Describe algorithms and their implementations, metrics, loss functions and equations |
| 1 | Create list of previous meetings in Google Sheets | Martin | Courtney | We will have a list of previous meetings in Google Sheets. We should keep the list updated with any future meetings. |
| 1 | Insert data from Google Sheets into Google Colab | Martin | Martin, Vicente | We will insert the required proposal sections (Leaders, Credits, and Goals) from their sections in the proposal in a table form. |
| 1 | Submit the Phase 1 proposal into the discussion | Vicente | Vicente | We will create a PDF for the discussion (part of Phase 1 requirement). Since it is a Canvas discussion and it is supposed to be submitted only by one person from the team, we might want to do this together and the person submitting the discussion will be screen sharing with others since others will not have access to it. |
| 1 | Submit Phase 1 assignment | Vicente | Vicente | We will submit the Phase 1 assignment before the deadline. |
| 2 | Test custom GCE VM for Colab | Martin | Martin, Vicente | Since we decided to gain more experience with running our code in the cloud, we will create a custom dedicated GCE virtual machine for Google Colab and compare its performance with Google Colab Pro version. We will need to ensure we can still save data into our shared directory in Google Drive. |
| 2 | EDA and data metrics | Kelly | Kelly, Martin | We will create more detailed EDA and data metrics for the Phase 2 assignment. |
| 2 | Create baseline pipeline for classification and regression | Vicente | Kelly, Vicente | We will use the baseline pipelines designed in the Phase 1 and describe them in a brief report. |
| 2 | Research pre-processing (grayscale and HOG) | Vicente | Vicente | We will research using grayscale and HOG and their potential for pre-processing and feature engineering. |
| 2 | Feature engineering and selection research | Courtney | Vicente | We will finalize our feature engineering and selection and will discuss with TA if our approach makes sense. |
| 2 | Create classification pipeline (SKLearn) | Courtney and Vicente | Vicente, Martin | We will build an image classification model (using a pipeline and SKLearn). |
| 2 | Create regression pipeline (SKLearn) | Kelly, Vicente and Martin | Kelly, Courtney | We will build a regression model (using a pipeline and SKLearn) with 4 target values [y_1, y_2, y_3, y_4] corresponding to the bounding box containing the object of interest. |
| 2 | Hyperparameter tuning | Martin | Martin, Vicente, Kelly | We will do a hyperparameter tuning in a pipeline form for both classification and regression models. |
| 2 | Homegrown detector pipeline research | Courtney | Courtney | If we still have time available (if not, we will be addressing this in PyTorch in the next phase). We will implement a homegrown linear regression model with 4 target values. The MSE loss function will be extended from 1 to 4 targets (based on the coordinates of the bounding box). |
| 2 | Create Colab notebook for Phase 2 submission | Courtney | Courtney | If we still have time available (if not, we will be addressing this in PyTorch in the next phase). Based on the previous stretch goal, we will implement a homegrown logistic regression mode which will use a combination of CXE and MSE as a multi-task loss function where the resulting model predicts class and coordinates of the bounding box. |
| 2 | Prepare, create, and record a presentation for Phase 2 | Kelly | Courtney, Kelly, Martin, Vicente | We will prepare a slide deck for presentation (under 300 words), review it, record it on Zoom (under 2 minutes long), and share it according to the requirements. |
| 2 | Submit Phase 2 presentation to the discussion board | Kelly | Kelly | We will submit the presentation and any additional requirements to the discussion board. |
| 2 | Submit Phase 2 assignment | Kelly | Kelly | We will submit the Phase 2 assignment before the deadline. |
| 2 | Learn and gain familiarity with pytorch functions | Courtney | Courtney, Martin | We will continue learning PyTorch, especially functions within PyTorch. |
| 2 | Research CNN | Courtney | Kelly | If we have time, we will research how we could use CNN in SKLearn and PyTorch. |
| 2 | Research pytorch model with MLP | Courtney | Courtney | If we have time, we will research how to use pytorch models using a MLP |
| 3 | AlexNet classification | Courtney | Courtney | We will build a classification model using AlexNet. |
| 3 | PyTorch digit detector classification model | Vicente | Vicente | We will build a PyTorch classification model using a multilayer perceptron. |
| 3 | PyTorch object detector regression model | Kelly | Kelly | We will build a PyTorch regression model using a multilayer perceptron with 4 target values [y_1, y_2, y_3, y_4] corresponding to the bounding box containing the object of interest. |
| 3 | Multi-headed object detector | Martin | Martin | We will build a multi-headed cat-dog detector using the OOP API in PyTorch with a combined loss function: CXE + MSE. |
| 3 | CNN for classification | Martin | Martin | We will build a baseline pipeline in PyTorch to do an object classification and object localization (predict the bounding box that contains the main object of interest in the image). |
| 3 | Prepare, create, and record a presentation for Phase 3 | Courtney | Courtney, Kelly, Martin, Vicente | We will prepare a slide deck for presentation (under 300 words), review it, record it on Zoom (under 2 minutes long), and share it according to the requirements. |
| 3 | Submit Phase 3 presentation to the discussion board | Courtney | Courtney | We will submit the presentation and any additional requirements to the discussion board. |
| 3 | Submit Phase 3 assignment | Courtney | Courtney | We will submit the Phase 3 assignment before the deadline. |
| 4 | Research transfer learning | Vicente and Courtney | Kelly, Courtney | We will research transfer learning for object detection and fine-tune it using EfficientNet (D0-D7) for object detection. We will describe the architecture and loss functions of EfficientDet. We will describe differences between EfficientDet D0 and EfficientDet D7. |
| 4 | Implement transfer learning(EfficientNet) | Courtney Vicente | Kelly | We will implement transfer learning for object detection based on the previous research task. |
| 4 | Alexnet Improvements | Kelly and Martin | Courtney | Fine tuning AlexNet Model that was built in phase 3 with a goal of at least 70% accuracy |
| 4 | Build FCN | Martin | Martin | We will create a convolutional neural network (FCN) for a single object classifier and detector. |
| 4 | Build CNN | Vicente | Vicente | We will create a convolutional neural network (CNN) for image classification. |
| 4 | Prepare, create, and record a presentation for Phase 4 | Martin | Martin, Vicente, Kelly, Courtney | We will prepare a slide deck for presentation (under 300 words), review it, record it on Zoom (under 2 minutes long), and share it according to the requirements. |
| 4 | Submit Phase 4 presentation to the discussion board | Martin | Martin | We will submit the presentation and any additional requirements to the discussion board. |
| 4 | Submit Phase 4 | Martin | Martin | We will submit the Phase 4 assignment before the deadline. |
This chart represents our currently planned schedule, which is based on our best estimates and will likely change.
# Code to create dataframe for Gantt Chart table
'''
How to get a Gantt plot using matplotlib?:
https://www.tutorialspoint.com/how-to-get-a-gantt-plot-using-matplotlib
'''
gc = gspread.authorize(creds)
worksheet = gc.open('TeamTable').worksheet('Gantt_Chart') # Load only the "Gantt_Chart" sheet from the file
rows = worksheet.get_all_values() # get_all_values gives a list of rows
pd.set_option('display.max_colwidth', 0)
# Create dataframe
gantt = pd.DataFrame.from_records(rows[7:], columns=rows[5])
# Keep first 3 columns
gantt = gantt.iloc[:, 1:4]
# Give column names
gantt.columns = ['TASK','START DATE','END DATE']
# Drop Phase rows
index_num = gantt[(gantt['TASK'] == 'Phase 0')|(gantt['TASK'] == 'Phase 1')|(gantt['TASK'] == 'Phase 2')|(gantt['TASK'] == 'Phase 3')|(gantt['TASK'] == 'Phase 4')].index
gantt = gantt.drop(index_num)
# gantt
# Create a gantt chart
# convert to datetime
gantt['START DATE']= pd.to_datetime(gantt['START DATE'])
gantt['END DATE']= pd.to_datetime(gantt['END DATE'])
# Create new columns
gantt['days_to_start'] = (gantt['START DATE'] - gantt['START DATE'].min()).dt.days
gantt['days_to_end'] = (gantt['END DATE'] - gantt['START DATE'].min()).dt.days
gantt['task_duration'] = gantt['days_to_end'] - gantt['days_to_start'] + 1 # to include also the end date
gantt['Phase'] = ['Phase 0'] * 3 + ['Phase 1'] * 17 + ['Phase 2'] * 16 + ['Phase 3'] * 8 + ['Phase 4'] * 8
# Create Gantt Chart
team_colors = {'Phase 0': 'g', 'Phase 1': 'c', 'Phase 2': 'm', 'Phase 3': 'y', 'Phase 4': 'b'} # dictionary with the team names as its keys and base matplotlib colors as its values
fig, ax = plt.subplots()
fig.set_size_inches(12, 18)
for index, row in gantt.iterrows():
plt.barh(y=row['TASK'], width=row['task_duration'], left=row['days_to_start'], color=team_colors[row['Phase']])
plt.title('Final Project Gantt Chart', fontsize=15)
plt.gca().invert_yaxis()
xticks = np.arange(0, gantt['days_to_end'].max()+2, 2)
xticklabels = pd.date_range(start=gantt['START DATE'].min(), end=gantt['END DATE'].max()).strftime("%d/%m")
# ticks
ax.set_xticks(xticks)
ax.set_xticklabels(xticklabels[::2])
# axis
ax.xaxis.grid(True, alpha=0.5)
# Adding a legend
patches = []
for team in team_colors:
patches.append(matplotlib.patches.Patch(color=team_colors[team]))
ax.legend(handles=patches, labels=team_colors.keys(), fontsize=11)
plt.show()
The goal of this project is to create optimal cat and dog image detection models. In the first phase of this project, we created baseline SKLearn pipelines using classification models to predict cat or dog labels and regression models to predict the image bounding box. In phase 2, we used PyTorch to create baseline classification and regression neural networks. In addition, we implemented hyperparameter tuning, grayscale conversion, image augmentation, and HOG feature engineering steps to improve our models' performance. In phase 3, we continued using image augmentation and constructed multilayer perceptron models for classification and regression using PyTorch. In the final phase of this project, we created more complex neural networks, such as a Convolutional Neural Network and a Fully Convolutional Network, using Keras and TensorFlow. In addition, we implemented transfer learning with AlexNet and EfficientNet. EfficientNetB5 was our top-performing model, with a test accuracy score of 99.4%.
In the fourth and final phase of our project, we planned to use our successes and failures from prior submissions as a benchmark to improve on the overall cat and dog image detection goal. We hoped to achieve accuracy scores above 70% and far surpassed that target. Within this phase, we will be implementing several classification methods, including a convolutional neural network (CNN), a fully convolutional neural network (FCN) for single-object detection and classification, and transfer learning using EfficientNet (B0-B5) and AlexNet.
CNN: Pictured below is the architecture that will be used to implement the CNN model for object detection. The model will be built with an input size of (128, 128, 3) via a reshape step. Augmentation and preprocessing will be applied to prevent overfitting, and the model will be fine-tuned by comparing the results of the Adam and RMSprop optimizers. Results will be displayed using TensorBoard.
from IPython.display import Image, display
colab_path = '/content/drive/MyDrive/aml/'
display(Image(colab_path + 'Pictures/CNN_model.png'))
FCN: As seen below, the input size for the fully convolutional neural network (FCN) will be (128, 128, 3). The FCN will be built similarly to the CNN model in terms of augmentation and preprocessing. This model will be fine-tuned by testing the stochastic gradient descent (SGD) and Adam optimizers and by adjusting the learning rate and batch size until the best results are achieved. Results will be displayed using TensorBoard.
display(Image(colab_path + 'Pictures/FCN_model.png'))
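For illustration, here is a minimal fully convolutional classifier sketch under the (128, 128, 3) input assumption above (classification head only, not our final detection architecture; the layer sizes are placeholders):
import tensorflow as tf
# Hypothetical FCN sketch: convolution and pooling only, no Flatten/Dense head.
fcn = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, padding='same', activation='relu', input_shape=(128, 128, 3)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, padding='same', activation='relu'),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(128, 3, padding='same', activation='relu'),
    tf.keras.layers.Conv2D(2, 1),                  # 1x1 convolution producing one map per class
    tf.keras.layers.GlobalAveragePooling2D(),      # collapse the spatial dimensions
    tf.keras.layers.Activation('softmax')          # class probabilities
])
fcn.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])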
EfficientNet (B0-B5): For the EfficientNet family of models, the input parameters (width, depth, and resolution) increase from EfficientNetB0 to EfficientNetB7. The input image resolution increases from 224 for EfficientNetB0 to 600 for EfficientNetB7. The RMSprop optimizer will be used, and the number of parameters increases with each successive variant. The architectural framework can be seen below.
display(Image(colab_path + 'Pictures/EN_Model.png'))
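As a rough sketch of the transfer-learning setup described above (illustrative only; the backbone variant, input size, and head are assumptions, not our final implementation), a frozen, ImageNet-pretrained EfficientNetB0 with a new two-class head could look like this:
import tensorflow as tf
# Hypothetical sketch: frozen EfficientNetB0 backbone plus a small cat/dog classification head.
base = tf.keras.applications.EfficientNetB0(include_top=False, weights='imagenet',
                                            input_shape=(224, 224, 3))
base.trainable = False  # freeze the pretrained backbone for the first round of training
en_model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(2, activation='softmax')
])
en_model.compile(optimizer=tf.keras.optimizers.RMSprop(learning_rate=1e-4),
                 loss='binary_crossentropy', metrics=['accuracy'])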
AlexNet: Below, we can see the network architecture used in the AlexNet model. The goal for this phase was to improve on the Phase 3 AlexNet results by applying a reshape step to change the input to the size AlexNet requires. This will be done by using a different dataset, defining a reshape model, and fine-tuning with different optimizers and learning rates until optimal results are achieved. Overall, this, together with increasing the number of epochs, should yield better accuracy and loss results.
display(Image(colab_path + 'Pictures/Alex_Model.png'))
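The reshape step mentioned above can be expressed as a resizing layer placed in front of the network. A small sketch (assuming our 128x128x3 inputs and AlexNet's 227x227 input size; only AlexNet's first convolution is shown) is:
import tensorflow as tf
# Hypothetical sketch: upsample our 128x128x3 images to AlexNet's expected 227x227 input
# before the first convolutional block (the remaining AlexNet layers are omitted here).
alexnet_front = tf.keras.Sequential([
    tf.keras.layers.Resizing(227, 227, input_shape=(128, 128, 3)),
    tf.keras.layers.Conv2D(96, 11, strides=4, activation='relu'),  # AlexNet's first conv layer
])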
Accuracy Equation: $${Accuracy} = \dfrac {c}{n} $$
Notation: \begin{align} c &\quad \text{Number of correct predictions}\\ n &\quad \text{Total number of predictions}\\ \end{align}
MSE Equation: Loss function that works for multiple algorithms, including logistic regression
\begin{equation}\tag{Reference: Lab 4 Linear Regression} MSE(\boldsymbol{\theta}) = f(\boldsymbol{\theta}) = \frac{1}{m}\sum_{i=1}^{m}\left[ \mathbf{x}_i\cdot\boldsymbol{\theta} - y_i\right]^2 \end{equation}
Notation: \begin{align} m &\quad \text{Number of examples in data set}\\ \mathbf{x}_i &\quad \text{input variable}\\ \boldsymbol{\theta} &\quad \text{model's parameter vector}\\ y &\quad \text{target value}\\ \end{align}
RMSE Equation: The root mean squared error is just the square root of the mean squared error
\begin{align}\tag{Reference: Lab 4 Linear Regression} \text{RMSE}(\mathbf{X}, h_{\mathbf{\theta}}) = \sqrt{\dfrac{1}{m} \sum\limits_{i=1}^{m}{( \mathbf{x}^{(i)}\cdot \mathbf{\theta} - y^{(i)})^2}} \end{align}
Notation: \begin{align} m &\quad \text{Number of examples in data set}\\ \mathbf{x}^{(i)} &\quad \text{input variable}\\ \boldsymbol{\theta} &\quad \text{model's parameter vector}\\ y^{(i)} &\quad \text{target value}\\ \end{align}
MAE Equation: \begin{align}\tag{Reference: Lab 4 Linear Regression} \text{MAE}(\mathbf{X}, h_{\mathbf{\theta}}) = \dfrac{1}{m} \sum\limits_{i=1}^{m}{| \mathbf{x}^{(i)}\cdot \mathbf{\theta} - y^{(i)}|} \end{align}
Notation:
\begin{align} m &\quad \text{Number of examples in data set}\\ \mathbf{x}^{(i)} &\quad \text{input variable}\\ \boldsymbol{\theta} &\quad \text{model's parameter vector}\\ y^{(i)} &\quad \text{target value}\\ \end{align}
BCE Equation: Binary Cross Entropy loss function
\begin{align} \tag{Reference: Lab 7 Logistic Regression} \mathcal{L}_{\text{BCE}}(y, \hat{y}) = -\frac{1}{N}\sum_{i=1}^{N} \left[ y_i \log(\hat{y_i}) + (1 - y_i) \log(1 - \hat{y_i}) \right] \end{align}
Notation: \begin{align} N &\quad \text{number of examples}\\ y &\quad \text{target class}\\ \hat{y} &\quad \text{predicted class probability}\\ \end{align}
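To make these formulas concrete, here is a minimal NumPy sketch (toy, hypothetical values for y_true, y_pred, and y_prob; not project data) that evaluates the accuracy, MSE, RMSE, MAE, and BCE definitions above:
import numpy as np
# Hypothetical toy labels, hard predictions, and predicted probabilities
y_true = np.array([1, 0, 1, 1, 0])
y_pred = np.array([1, 0, 0, 1, 0])
y_prob = np.array([0.9, 0.2, 0.4, 0.8, 0.1])
accuracy = np.mean(y_true == y_pred)             # c / n
mse = np.mean((y_prob - y_true) ** 2)            # mean squared error
rmse = np.sqrt(mse)                              # root mean squared error
mae = np.mean(np.abs(y_prob - y_true))           # mean absolute error
bce = -np.mean(y_true * np.log(y_prob) + (1 - y_true) * np.log(1 - y_prob))  # binary cross entropy
print(f"accuracy={accuracy:.3f} mse={mse:.3f} rmse={rmse:.3f} mae={mae:.3f} bce={bce:.3f}")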
Confusion Matrix Equations: For confusion matrices, we will be using precision and F1 score
Notation: \begin{align} TP &\quad \text{True Positives in matrix}\\ TN &\quad \text{True Negatives in matrix}\\ FP &\quad \text{False Positives in matrix}\\ FN &\quad \text{False Negatives in matrix}\\ \end{align}
Precision
\begin{align} \text{Precision} = \dfrac{TP}{TP+FP} \end{align}
F1 Score
\begin{align} \text{F1 Score} = 2\cdot\dfrac{\text{Precision}\cdot\text{Recall}}{\text{Precision}+\text{Recall}} \end{align}
Where: \begin{align} \text{Recall} = \dfrac{TP}{TP+FN} \end{align}
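As a quick sanity check of these definitions, the sketch below (hypothetical labels, 0 = cat and 1 = dog) computes precision, recall, and F1 from the confusion-matrix counts and confirms they match scikit-learn's built-in metrics:
import numpy as np
from sklearn.metrics import confusion_matrix, precision_score, recall_score, f1_score
# Hypothetical binary labels: 0 = cat, 1 = dog
y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0])
y_pred = np.array([0, 1, 1, 1, 0, 0, 1, 0])
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
precision = tp / (tp + fp)
recall = tp / (tp + fn)
f1 = 2 * precision * recall / (precision + recall)
# These should match sklearn's implementations
assert np.isclose(precision, precision_score(y_true, y_pred))
assert np.isclose(recall, recall_score(y_true, y_pred))
assert np.isclose(f1, f1_score(y_true, y_pred))
print(f"precision={precision:.3f} recall={recall:.3f} f1={f1:.3f}")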
The data set for this project is a subset of Open Images v6, and it contains 12,966 RGB images of cats and dogs with various shapes and aspect ratios. In addition, the bounding box coordinates of the object in each image are stored in a .csv file. There are 6,855 data points classified as dogs and 6,111 as cats. The image bounding box file contains the image ID, label, normalized bounding box coordinates (XMin, XMax, YMin, YMax), and several attribute flags, as shown below.
The code below reads the dataset and loads it into memory.
# Loading Variables
# These 2 lines below can be changed based on what you want to run:
dataset = 'cadod.csv' # cadod.csv or cadod_experimental.csv
gdrive_directory = 'aml' # This is your personal GDrive directory. For example, in Vicente's case: MLProject
################################
# You do not need to edit these variables
colab_path = '/content/drive/MyDrive/' + gdrive_directory + '/aml'
images_path = colab_path + '/images'
# images_resized_path = images_path + '/resized'
# df = pd.read_csv(colab_path + '/' + dataset)
df = pd.read_csv(colab_path + '/' + dataset)
# Image Bounding Box File
# List the first 5, last 5, and random 5 rows from the Bounding Box File. This gives us a quick overview of the data.
display(df.head())
print()
display(df.tail())
print()
display(df.sample(n=5))
| | ImageID | Source | LabelName | Confidence | XMin | XMax | YMin | YMax | IsOccluded | IsTruncated | ... | IsDepiction | IsInside | XClick1X | XClick2X | XClick3X | XClick4X | XClick1Y | XClick2Y | XClick3Y | XClick4Y |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | 0000b9fcba019d36 | xclick | /m/0bt9lr | 1 | 0.165000 | 0.903750 | 0.268333 | 0.998333 | 1 | 1 | ... | 0 | 0 | 0.636250 | 0.903750 | 0.748750 | 0.165000 | 0.268333 | 0.506667 | 0.998333 | 0.661667 |
| 1 | 0000cb13febe0138 | xclick | /m/0bt9lr | 1 | 0.000000 | 0.651875 | 0.000000 | 0.999062 | 1 | 1 | ... | 0 | 0 | 0.312500 | 0.000000 | 0.317500 | 0.651875 | 0.000000 | 0.410882 | 0.999062 | 0.999062 |
| 2 | 0005a9520eb22c19 | xclick | /m/0bt9lr | 1 | 0.094167 | 0.611667 | 0.055626 | 0.998736 | 1 | 1 | ... | 0 | 0 | 0.487500 | 0.611667 | 0.243333 | 0.094167 | 0.055626 | 0.226296 | 0.998736 | 0.305942 |
| 3 | 0006303f02219b07 | xclick | /m/0bt9lr | 1 | 0.000000 | 0.999219 | 0.000000 | 0.998824 | 1 | 1 | ... | 0 | 0 | 0.508594 | 0.999219 | 0.000000 | 0.478906 | 0.000000 | 0.375294 | 0.720000 | 0.998824 |
| 4 | 00064d23bf997652 | xclick | /m/0bt9lr | 1 | 0.240938 | 0.906183 | 0.000000 | 0.694286 | 0 | 0 | ... | 0 | 0 | 0.678038 | 0.906183 | 0.240938 | 0.522388 | 0.000000 | 0.370000 | 0.424286 | 0.694286 |
5 rows × 21 columns
| | ImageID | Source | LabelName | Confidence | XMin | XMax | YMin | YMax | IsOccluded | IsTruncated | ... | IsDepiction | IsInside | XClick1X | XClick2X | XClick3X | XClick4X | XClick1Y | XClick2Y | XClick3Y | XClick4Y |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 12961 | ffc65ce637cbd73d | xclick | /m/01yrx | 1 | 0.135000 | 0.844375 | 0.000000 | 0.999167 | 0 | 1 | ... | 0 | 0 | 0.176875 | 0.135000 | 0.450625 | 0.844375 | 0.000000 | 0.685000 | 0.999167 | 0.995000 |
| 12962 | ffd1e6a91d92bf83 | xclick | /m/01yrx | 1 | 0.000625 | 0.999375 | 0.005000 | 0.825000 | 1 | 1 | ... | 0 | 0 | 0.220625 | 0.125000 | 0.000625 | 0.999375 | 0.825000 | 0.005000 | 0.261667 | 0.303333 |
| 12963 | ffe91ea1debeefb3 | xclick | /m/01yrx | 1 | 0.001475 | 0.988201 | 0.042406 | 0.624260 | 1 | 1 | ... | 0 | 0 | 0.473451 | 0.001475 | 0.019174 | 0.988201 | 0.042406 | 0.327416 | 0.624260 | 0.358974 |
| 12964 | ffebb214b9df34aa | xclick | /m/01yrx | 1 | 0.000000 | 0.998125 | 0.037523 | 0.999062 | 0 | 1 | ... | 0 | 0 | 0.000000 | 0.399375 | 0.998125 | 0.581250 | 0.676360 | 0.037523 | 0.560976 | 0.999062 |
| 12965 | fffcbea446a0b7b9 | xclick | /m/01yrx | 1 | 0.148045 | 0.999069 | 0.070640 | 0.947020 | 0 | 1 | ... | 0 | 0 | 0.558659 | 0.148045 | 0.808194 | 0.999069 | 0.070640 | 0.370861 | 0.947020 | 0.835541 |
5 rows × 21 columns
| | ImageID | Source | LabelName | Confidence | XMin | XMax | YMin | YMax | IsOccluded | IsTruncated | ... | IsDepiction | IsInside | XClick1X | XClick2X | XClick3X | XClick4X | XClick1Y | XClick2Y | XClick3Y | XClick4Y |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 4337 | 9d22da1e9caa418e | xclick | /m/0bt9lr | 1 | 0.000000 | 0.893750 | 0.000000 | 0.999163 | 0 | 1 | ... | 0 | 0 | 0.120625 | 0.000000 | 0.893750 | 0.561875 | 0.000000 | 0.184937 | 0.897908 | 0.999163 |
| 5185 | bc9770bf4c2d4121 | xclick | /m/0bt9lr | 1 | 0.001250 | 0.915625 | 0.000000 | 0.868333 | 0 | 1 | ... | 0 | 0 | 0.290625 | 0.001250 | 0.860625 | 0.915625 | 0.000000 | 0.527500 | 0.868333 | 0.746667 |
| 6979 | 0122ed068d6d0d40 | xclick | /m/01yrx | 1 | 0.181250 | 0.880625 | 0.067500 | 0.902500 | 0 | 0 | ... | 0 | 0 | 0.307500 | 0.880625 | 0.181250 | 0.367500 | 0.067500 | 0.504167 | 0.395000 | 0.902500 |
| 12797 | f7e71b714f18a026 | xclick | /m/01yrx | 1 | 0.317647 | 0.998824 | 0.000000 | 0.998230 | 0 | 1 | ... | 0 | 0 | 0.748235 | 0.998824 | 0.830588 | 0.317647 | 0.000000 | 0.495575 | 0.998230 | 0.631858 |
| 11892 | ce6ee11659b2f914 | xclick | /m/01yrx | 1 | 0.181250 | 0.940625 | 0.060417 | 0.997917 | 1 | 1 | ... | 0 | 0 | 0.206250 | 0.181250 | 0.496875 | 0.940625 | 0.060417 | 0.600000 | 0.997917 | 0.668750 |
5 rows × 21 columns
# Descriptive statistics
df.describe()
| | Confidence | XMin | XMax | YMin | YMax | IsOccluded | IsTruncated | IsGroupOf | IsDepiction | IsInside | XClick1X | XClick2X | XClick3X | XClick4X | XClick1Y | XClick2Y | XClick3Y | XClick4Y |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| count | 12966.0 | 12966.000000 | 12966.000000 | 12966.000000 | 12966.000000 | 12966.000000 | 12966.000000 | 12966.000000 | 12966.000000 | 12966.000000 | 12966.000000 | 12966.000000 | 12966.000000 | 12966.000000 | 12966.000000 | 12966.000000 | 12966.000000 | 12966.000000 |
| mean | 1.0 | 0.099437 | 0.901750 | 0.088877 | 0.945022 | 0.464754 | 0.738470 | 0.013651 | 0.045427 | 0.001157 | 0.390356 | 0.424582 | 0.494143 | 0.506689 | 0.275434 | 0.447448 | 0.641749 | 0.582910 |
| std | 0.0 | 0.113023 | 0.111468 | 0.097345 | 0.081500 | 0.499239 | 0.440011 | 0.118019 | 0.209354 | 0.040229 | 0.358313 | 0.441751 | 0.405033 | 0.462281 | 0.415511 | 0.401580 | 0.448054 | 0.403454 |
| min | 1.0 | 0.000000 | 0.408125 | 0.000000 | 0.451389 | -1.000000 | -1.000000 | -1.000000 | -1.000000 | -1.000000 | -1.000000 | -1.000000 | -1.000000 | -1.000000 | -1.000000 | -1.000000 | -1.000000 | -1.000000 |
| 25% | 1.0 | 0.000000 | 0.830625 | 0.000000 | 0.910000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.221292 | 0.096875 | 0.285071 | 0.130000 | 0.024323 | 0.218333 | 0.405816 | 0.400000 |
| 50% | 1.0 | 0.061250 | 0.941682 | 0.059695 | 0.996875 | 0.000000 | 1.000000 | 0.000000 | 0.000000 | 0.000000 | 0.435625 | 0.415625 | 0.531919 | 0.623437 | 0.146319 | 0.480838 | 0.825000 | 0.646667 |
| 75% | 1.0 | 0.167500 | 0.998889 | 0.144853 | 0.999062 | 1.000000 | 1.000000 | 0.000000 | 0.000000 | 0.000000 | 0.609995 | 0.820000 | 0.787500 | 0.917529 | 0.561323 | 0.729069 | 0.998042 | 0.882500 |
| max | 1.0 | 0.592500 | 1.000000 | 0.587088 | 1.000000 | 1.000000 | 1.000000 | 1.000000 | 1.000000 | 1.000000 | 0.999375 | 0.999375 | 1.000000 | 0.999375 | 0.999375 | 0.999375 | 1.000000 | 0.999375 |
# Print a concise summary of the bounding box file.
df.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 12966 entries, 0 to 12965
Data columns (total 21 columns):
 #   Column       Non-Null Count  Dtype
---  ------       --------------  -----
 0   ImageID      12966 non-null  object
 1   Source       12966 non-null  object
 2   LabelName    12966 non-null  object
 3   Confidence   12966 non-null  int64
 4   XMin         12966 non-null  float64
 5   XMax         12966 non-null  float64
 6   YMin         12966 non-null  float64
 7   YMax         12966 non-null  float64
 8   IsOccluded   12966 non-null  int64
 9   IsTruncated  12966 non-null  int64
 10  IsGroupOf    12966 non-null  int64
 11  IsDepiction  12966 non-null  int64
 12  IsInside     12966 non-null  int64
 13  XClick1X     12966 non-null  float64
 14  XClick2X     12966 non-null  float64
 15  XClick3X     12966 non-null  float64
 16  XClick4X     12966 non-null  float64
 17  XClick1Y     12966 non-null  float64
 18  XClick2Y     12966 non-null  float64
 19  XClick3Y     12966 non-null  float64
 20  XClick4Y     12966 non-null  float64
dtypes: float64(12), int64(6), object(3)
memory usage: 2.1+ MB
# Check for any NA values
df.isnull().sum()
ImageID 0 Source 0 LabelName 0 Confidence 0 XMin 0 XMax 0 YMin 0 YMax 0 IsOccluded 0 IsTruncated 0 IsGroupOf 0 IsDepiction 0 IsInside 0 XClick1X 0 XClick2X 0 XClick3X 0 XClick4X 0 XClick1Y 0 XClick2Y 0 XClick3Y 0 XClick4Y 0 dtype: int64
# Check data type of each column
df.dtypes
ImageID object Source object LabelName object Confidence int64 XMin float64 XMax float64 YMin float64 YMax float64 IsOccluded int64 IsTruncated int64 IsGroupOf int64 IsDepiction int64 IsInside int64 XClick1X float64 XClick2X float64 XClick3X float64 XClick4X float64 XClick1Y float64 XClick2Y float64 XClick3Y float64 XClick4Y float64 dtype: object
# List how much memory each column uses in bytes.
# This could be useful if we were working with truly large data sets and memory were a constraint.
df.memory_usage()
Index 128 ImageID 103728 Source 103728 LabelName 103728 Confidence 103728 XMin 103728 XMax 103728 YMin 103728 YMax 103728 IsOccluded 103728 IsTruncated 103728 IsGroupOf 103728 IsDepiction 103728 IsInside 103728 XClick1X 103728 XClick2X 103728 XClick3X 103728 XClick4X 103728 XClick1Y 103728 XClick2Y 103728 XClick3Y 103728 XClick4Y 103728 dtype: int64
# List counts of unique values in each column. This is useful for determining if the variable is categorical.
df.nunique()
ImageID 12966 Source 2 LabelName 2 Confidence 1 XMin 3109 XMax 3499 YMin 4822 YMax 4219 IsOccluded 3 IsTruncated 3 IsGroupOf 3 IsDepiction 3 IsInside 3 XClick1X 5135 XClick2X 4763 XClick3X 5074 XClick4X 4659 XClick1Y 6351 XClick2Y 7314 XClick3Y 6252 XClick4Y 7180 dtype: int64
# Get the rows and columns of the bounding box data file.
print(f"There are {df.shape[0]} rows and {df.shape[1]} columns")
There are 12966 rows and 21 columns
# Number of dog vs cat labels.
# The counts of cats and dogs were close enough that we concluded not to do any sample balancing.
df.LabelName.replace({'/m/01yrx':'cat', '/m/0bt9lr':'dog'}, inplace=True)
df.LabelName.value_counts()
dog 6855 cat 6111 Name: LabelName, dtype: int64
# plot random 6 images
fig, ax = plt.subplots(nrows=2, ncols=3, sharex=False, sharey=False,figsize=(15,10))
ax = ax.flatten()
for i,j in enumerate(np.random.choice(df.shape[0], size=6, replace=False)):
img = mpimg.imread(images_path + "/" + df.ImageID.values[j] + '.jpg')
h, w = img.shape[:2]
coords = df.iloc[j,4:8]
ax[i].imshow(img)
ax[i].set_title(df.LabelName[j])
ax[i].add_patch(plt.Rectangle((coords[0]*w, coords[2]*h),
coords[1]*w-coords[0]*w, coords[3]*h-coords[2]*h,
edgecolor='red', facecolor='none'))
plt.tight_layout()
plt.show()
This is a bar plot of all the image shape counts. The "other" category includes all of the image shapes that occur fewer than 100 times. 512x384 is the most common image size.
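The img_df and img_size variables used below are built elsewhere in our notebook; a minimal sketch of how they could be reconstructed from the image directory is shown here (assuming the images_path and df variables from the loading cell above).
from collections import Counter
from PIL import Image as PILImage  # avoid clashing with IPython.display.Image imported earlier
# Collect the width x height shape and file size (in KB) of every image referenced in df
shape_counts = Counter()
sizes_kb = []
for image_id in df.ImageID.values:
    file_path = images_path + "/" + image_id + '.jpg'
    with PILImage.open(file_path) as img:
        shape_counts[f"{img.width}x{img.height}"] += 1
    sizes_kb.append(os.path.getsize(file_path) / 1000)
# Group rare shapes (fewer than 100 images) into an "other" bucket
img_df = pd.DataFrame(list(shape_counts.items()), columns=['img_shape', 'img_count'])
other_count = img_df.loc[img_df['img_count'] < 100, 'img_count'].sum()
img_df = img_df.loc[img_df['img_count'] >= 100]
img_df = pd.concat([img_df, pd.DataFrame([{'img_shape': 'other', 'img_count': other_count}])],
                   ignore_index=True)
img_size = np.array(sizes_kb)  # kilobytes; divided by 1000 below to convert to megabytes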
#Plot image shape counts
img_df.sort_values('img_count', inplace=True)
img_df.plot(x='img_shape', y='img_count', kind='barh', figsize=(8,8), legend=False)
plt.title('Image Shape Counts')
plt.show()
# convert to megabytes
img_size = img_size / 1000
This is a histogram and box plot of image size (MB).
# Plot image size distribution
fig, ax = plt.subplots(1, 2, figsize=(15,5))
fig.suptitle('Image Size Distribution')
ax[0].hist(img_size, bins=50)
ax[0].set_title('Histogram')
ax[0].set_xlabel('Image Size (MB)')
ax[1].boxplot(img_size, vert=False, widths=0.5)
ax[1].set_title('Boxplot')
ax[1].set_xlabel('Image Size (MB)')
ax[1].set_ylabel('Images')
plt.show()
colab_path = '/content/drive/MyDrive/MLProject/aml/data/martin'
X = np.load(colab_path + '/img.npy', allow_pickle=True)
y_label = np.load(colab_path + '/y_label.npy', allow_pickle=True)
y_bbox = np.load(colab_path + '/y_bbox.npy', allow_pickle=True)
TensorBoard
import tensorflow as tf
import datetime
!pip install -q -U tensorboard
%load_ext tensorboard
Keras
!pip install livelossplot
from tensorflow.keras.utils import to_categorical
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout
from tensorflow.keras.utils import plot_model
from livelossplot import PlotLossesKeras
from keras.callbacks import CSVLogger
from tensorflow.keras.callbacks import TensorBoard
Numpy, Pandas, Sklearn, Google Colab
import csv
import numpy as np
import random
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import os
import pandas as pd
from PIL import Image
from sklearn.exceptions import ConvergenceWarning
from sklearn.metrics import accuracy_score
from sklearn.metrics import classification_report
from sklearn.metrics import confusion_matrix, ConfusionMatrixDisplay
from sklearn.model_selection import train_test_split
import warnings
warnings.filterwarnings("ignore")
from google.colab import auth
from google.auth import default
from google.colab import drive
import gspread
auth.authenticate_user()
creds, _ = default()
gc = gspread.authorize(creds)
drive.mount('/content/drive')
Mounted at /content/drive
#colab_path = '/content/drive/MyDrive/MLProject/aml/data/martin'
#X = np.load(colab_path + '/img.npy', allow_pickle=True)
#y_label = np.load(colab_path + '/y_label.npy', allow_pickle=True)
#y_bbox = np.load(colab_path + '/y_bbox.npy', allow_pickle=True)
X_train, X_test, y_train, y_test_label = train_test_split(X, y_label, test_size=0.2, random_state=42)
X_train, X_valid, y_train, y_valid = train_test_split(X_train, y_train, test_size=0.2, random_state=42)
X_train.shape # lets check the shape
(8297, 49152)
X_test.shape # lets check the shape
(2594, 49152)
# https://www.w3schools.com/python/gloss_python_check_if_set_item_exists.asp
if set(y_train) == {0, 1}: # labels 0 and 1
print("Labels are 0 and 1")
else:
print("Labels are not 0 and 1")
Labels are 0 and 1
X_train = X_train.reshape((-1, 128, 128, 3)) # reshaping as before, but using channel 3 for RGB
X_valid = X_valid.reshape((-1, 128, 128, 3)) # reshaping as before, but using channel 3 for RGB
X_test = X_test.reshape((-1, 128, 128, 3)) # reshaping as before, but using channel 3 for RGB
num_classes = len(set(y_label))
y_train = to_categorical(y_train, num_classes)
y_valid = to_categorical(y_valid, num_classes)
y_test = to_categorical(y_test_label, num_classes)
Our data augmentation process is a mix of the three sources below. We set up a Keras ImageDataGenerator for the training, validation, and testing sets. On the training set we apply a few basic data augmentation techniques to create data batches, including normalization (rescale), rotation, shifts (width and height), shear range, zoom, horizontal flip, and fill mode. The validation and test sets are only normalized. Keras ImageDataGenerator, '.fit()', and '.flow()' are used to set up the datasets.
# https://www.geeksforgeeks.org/python-data-augmentation/
# https://gsurma.medium.com/image-classifier-cats-vs-dogs-with-convolutional-neural-networks-cnns-and-google-colabs-4e9af21ae7a8
# https://www.tensorflow.org/api_docs/python/tf/keras/preprocessing/image/ImageDataGenerator
train_generator = ImageDataGenerator(
rescale=1./255,
rotation_range=30,
width_shift_range=0.1,
height_shift_range=0.1,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True,
fill_mode='nearest') # Determines how newly created pixels (e.g., after rotation or shift) are filled in
valid_generator = ImageDataGenerator(rescale=1./255) # normalization
test_generator = ImageDataGenerator(rescale=1./255) # normalization
'''
We tried to use the code below for data augmentation techniques, however we found that the above code returns better results.
'''
#train_generator = ImageDataGenerator(
#rescale=1./255,
#rotation_range=45,
#width_shift_range=0.2,
#height_shift_range=0.2,
#shear_range=0.3,
#zoom_range=0.3,
#horizontal_flip=True,
#fill_mode='nearest',
#brightness_range=[0.5, 1.5])
#valid_generator = ImageDataGenerator(rescale=1./255)
#test_generator = ImageDataGenerator(rescale=1./255)
train_generator.fit(X_train) # applying fit
valid_generator.fit(X_valid) # applying fit
test_generator.fit(X_test) # applying fit
# https://studymachinelearning.com/keras-imagedatagenerator-with-flow/
batch = 32 # Keras batch size default.
train_generator = train_generator.flow(X_train, y_train, batch_size=batch) # applying Keras .flow()
valid_generator = valid_generator.flow(X_valid, y_valid, batch_size=batch) # applying Keras .flow()
test_generator = test_generator.flow(X_test, y_test, batch_size=batch) # applying Keras .flow()
The following CNN model came from the tutorial provided below. The model uses the Keras Sequential API and has several layers, such as Conv2D (convolutional layers), MaxPooling2D (max pooling for 2D spatial data), and Dense (fully connected). It is important to note that our input size will be 128x128x3, matching the shape we gave the data when reshaping it. The model also consists of:
ReLU Activation Function (Rectified Linear Unit): With default values, this returns the standard ReLU activation, max(x, 0), the element-wise maximum of 0 and the input tensor.
$$ReLU(x) = (x)^+ = \max(0, x)$$
Softmax: Converts a vector of scores into probabilities; it is typically used for the last layer of a classification network.
$$\text{softmax}(\mathbf{x})_i = \frac{e^{x_i}}{\sum_j e^{x_j}}$$
Binary Cross Entropy: Computes the cross-entropy loss between true labels and predicted labels.
$$\mathcal{L}_{\text{BCE}}(y, \hat{y}) = -\frac{1}{N}\sum_{i=1}^{N} \left[ y_i \log(\hat{y_i}) + (1 - y_i) \log(1 - \hat{y_i}) \right]$$
We decided to use two-dimensional convolutional layers to extract the most relevant features from the images; they apply filters that detect specific features in the image. The max pooling layers reduce dimensionality while retaining the most important features. The flatten layer converts the feature maps into a one-dimensional vector for the classification head. The dense layers compute a weighted sum of their inputs and apply an activation function, in this case ReLU. The final dense layer uses the softmax activation function to output class probabilities.
Sources:
from tensorflow.keras.optimizers import Adam # Optimizer
#from tensorflow.keras.optimizers import RMSprop
'''
Model from tutorial:
https://gsurma.medium.com/image-classifier-cats-vs-dogs-with-convolutional-neural-networks-cnns-and-google-colabs-4e9af21ae7a8
'''
model = Sequential()
model.add(Conv2D(32, (3, 3), padding='same', input_shape=(128, 128, 3), activation='relu'))
model.add(Conv2D(32, (3, 3), padding='same', activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(64, (3, 3), padding='same', activation='relu'))
model.add(Conv2D(64, (3, 3), padding='same', activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(128, (3, 3), padding='same', activation='relu'))
model.add(Conv2D(128, (3, 3), padding='same', activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(256, (3, 3), padding='same', activation='relu'))
model.add(Conv2D(256, (3, 3), padding='same', activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten())
model.add(Dense(256, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(256, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(num_classes, activation='softmax'))
model.compile(loss='binary_crossentropy',
optimizer=Adam(learning_rate=0.0001),
metrics=['accuracy'])
# https://machinelearningmastery.com/visualize-deep-learning-neural-network-model-keras/
model.summary()
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d (Conv2D) (None, 128, 128, 32) 896
conv2d_1 (Conv2D) (None, 128, 128, 32) 9248
max_pooling2d (MaxPooling2D (None, 64, 64, 32) 0
)
conv2d_2 (Conv2D) (None, 64, 64, 64) 18496
conv2d_3 (Conv2D) (None, 64, 64, 64) 36928
max_pooling2d_1 (MaxPooling (None, 32, 32, 64) 0
2D)
conv2d_4 (Conv2D) (None, 32, 32, 128) 73856
conv2d_5 (Conv2D) (None, 32, 32, 128) 147584
max_pooling2d_2 (MaxPooling (None, 16, 16, 128) 0
2D)
conv2d_6 (Conv2D) (None, 16, 16, 256) 295168
conv2d_7 (Conv2D) (None, 16, 16, 256) 590080
max_pooling2d_3 (MaxPooling (None, 8, 8, 256) 0
2D)
flatten (Flatten) (None, 16384) 0
dense (Dense) (None, 256) 4194560
dropout (Dropout) (None, 256) 0
dense_1 (Dense) (None, 256) 65792
dropout_1 (Dropout) (None, 256) 0
dense_2 (Dense) (None, 2) 514
=================================================================
Total params: 5,433,122
Trainable params: 5,433,122
Non-trainable params: 0
_________________________________________________________________
# https://machinelearningmastery.com/visualize-deep-learning-neural-network-model-keras/
plot_model(model, to_file='model.png', show_shapes=True)
Our training will consist of 25 epochs to see how far our CNN model can learn. Just like the tutorial, the training section produces a training-log CSV that we use to create a dataframe and to display the maximum accuracy and loss values. For this task, we visualize training results in real time using TensorBoard. It is worth mentioning that we took inspiration from other sources to incorporate 'log_dir', which timestamps the run, and 'tensorboard_callback', which lets us visualize results in TensorBoard. There is no special rule for the number of epochs; we wanted something between 20 and 30 (even though we used 20 back in Phase 3).
# Delete any logs from previous runs
!rm -rf ./logs/
#https://gsurma.medium.com/image-classifier-cats-vs-dogs-with-convolutional-neural-networks-cnns-and-google-colabs-4e9af21ae7a8
#https://machinelearningmastery.com/display-deep-learning-model-training-history-in-keras/
#https://www.tensorflow.org/tensorboard/get_started
epochs = 25
csv_log = 'Adamtraining_logs.csv'
log_dir = "logs/fit/" + datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir=log_dir, histogram_freq=1)
%tensorboard --logdir logs/fit
history = model.fit_generator(
train_generator,
steps_per_epoch=len(X_train) // batch,
epochs=epochs,
validation_data=valid_generator,
validation_steps=len(X_valid) // batch,
callbacks=[CSVLogger(csv_log, append=False, separator=';'), tensorboard_callback]
)
Epoch 1/25 259/259 [==============================] - 32s 122ms/step - loss: 0.6904 - accuracy: 0.5390 - val_loss: 0.6899 - val_accuracy: 0.5479 Epoch 2/25 259/259 [==============================] - 31s 119ms/step - loss: 0.6866 - accuracy: 0.5627 - val_loss: 0.6828 - val_accuracy: 0.5640 Epoch 3/25 259/259 [==============================] - 31s 118ms/step - loss: 0.6833 - accuracy: 0.5693 - val_loss: 0.6898 - val_accuracy: 0.5659 Epoch 4/25 259/259 [==============================] - 30s 117ms/step - loss: 0.6795 - accuracy: 0.5724 - val_loss: 0.6727 - val_accuracy: 0.5815 Epoch 5/25 259/259 [==============================] - 31s 118ms/step - loss: 0.6736 - accuracy: 0.5817 - val_loss: 0.6778 - val_accuracy: 0.5659 Epoch 6/25 259/259 [==============================] - 31s 119ms/step - loss: 0.6740 - accuracy: 0.5780 - val_loss: 0.6684 - val_accuracy: 0.6021 Epoch 7/25 259/259 [==============================] - 31s 119ms/step - loss: 0.6696 - accuracy: 0.5864 - val_loss: 0.6593 - val_accuracy: 0.6128 Epoch 8/25 259/259 [==============================] - 31s 119ms/step - loss: 0.6609 - accuracy: 0.6126 - val_loss: 0.6532 - val_accuracy: 0.6255 Epoch 9/25 259/259 [==============================] - 31s 121ms/step - loss: 0.6514 - accuracy: 0.6143 - val_loss: 0.6465 - val_accuracy: 0.6367 Epoch 10/25 259/259 [==============================] - 31s 118ms/step - loss: 0.6462 - accuracy: 0.6288 - val_loss: 0.6326 - val_accuracy: 0.6504 Epoch 11/25 259/259 [==============================] - 31s 120ms/step - loss: 0.6384 - accuracy: 0.6365 - val_loss: 0.6229 - val_accuracy: 0.6509 Epoch 12/25 259/259 [==============================] - 31s 120ms/step - loss: 0.6233 - accuracy: 0.6491 - val_loss: 0.6122 - val_accuracy: 0.6719 Epoch 13/25 259/259 [==============================] - 31s 121ms/step - loss: 0.6149 - accuracy: 0.6632 - val_loss: 0.5985 - val_accuracy: 0.6802 Epoch 14/25 259/259 [==============================] - 31s 121ms/step - loss: 0.6021 - accuracy: 0.6780 - val_loss: 0.5982 - val_accuracy: 0.6738 Epoch 15/25 259/259 [==============================] - 31s 121ms/step - loss: 0.5864 - accuracy: 0.6895 - val_loss: 0.5718 - val_accuracy: 0.7021 Epoch 16/25 259/259 [==============================] - 32s 124ms/step - loss: 0.5920 - accuracy: 0.6831 - val_loss: 0.5658 - val_accuracy: 0.7080 Epoch 17/25 259/259 [==============================] - 32s 122ms/step - loss: 0.5690 - accuracy: 0.7084 - val_loss: 0.5675 - val_accuracy: 0.7046 Epoch 18/25 259/259 [==============================] - 31s 121ms/step - loss: 0.5597 - accuracy: 0.7096 - val_loss: 0.5648 - val_accuracy: 0.7021 Epoch 19/25 259/259 [==============================] - 31s 121ms/step - loss: 0.5467 - accuracy: 0.7228 - val_loss: 0.5389 - val_accuracy: 0.7280 Epoch 20/25 259/259 [==============================] - 31s 120ms/step - loss: 0.5330 - accuracy: 0.7373 - val_loss: 0.5642 - val_accuracy: 0.7148 Epoch 21/25 259/259 [==============================] - 31s 120ms/step - loss: 0.5249 - accuracy: 0.7381 - val_loss: 0.5100 - val_accuracy: 0.7427 Epoch 22/25 259/259 [==============================] - 31s 121ms/step - loss: 0.5123 - accuracy: 0.7474 - val_loss: 0.5271 - val_accuracy: 0.7285 Epoch 23/25 259/259 [==============================] - 32s 122ms/step - loss: 0.5038 - accuracy: 0.7552 - val_loss: 0.4954 - val_accuracy: 0.7544 Epoch 24/25 259/259 [==============================] - 31s 120ms/step - loss: 0.5033 - accuracy: 0.7537 - val_loss: 0.4927 - val_accuracy: 0.7588 Epoch 25/25 259/259 [==============================] - 31s 
120ms/step - loss: 0.4878 - accuracy: 0.7620 - val_loss: 0.4820 - val_accuracy: 0.7598
# lets visualize our new training_logs.csv file to find max accuracy and loss values
log_path = '/content/Adamtraining_logs.csv'
df = pd.read_csv(log_path, delimiter=';')
df
| | epoch | accuracy | loss | val_accuracy | val_loss |
|---|---|---|---|---|---|
| 0 | 0 | 0.539020 | 0.690437 | 0.547852 | 0.689944 |
| 1 | 1 | 0.562734 | 0.686640 | 0.563965 | 0.682761 |
| 2 | 2 | 0.569268 | 0.683306 | 0.565918 | 0.689816 |
| 3 | 3 | 0.572414 | 0.679533 | 0.581543 | 0.672747 |
| 4 | 4 | 0.581730 | 0.673555 | 0.565918 | 0.677783 |
| 5 | 5 | 0.577979 | 0.673982 | 0.602051 | 0.668389 |
| 6 | 6 | 0.586449 | 0.669636 | 0.612793 | 0.659278 |
| 7 | 7 | 0.612583 | 0.660856 | 0.625488 | 0.653203 |
| 8 | 8 | 0.614277 | 0.651432 | 0.636719 | 0.646522 |
| 9 | 9 | 0.628796 | 0.646218 | 0.650391 | 0.632567 |
| 10 | 10 | 0.636540 | 0.638411 | 0.650879 | 0.622854 |
| 11 | 11 | 0.649123 | 0.623278 | 0.671875 | 0.612239 |
| 12 | 12 | 0.663158 | 0.614859 | 0.680176 | 0.598472 |
| 13 | 13 | 0.678040 | 0.602126 | 0.673828 | 0.598157 |
| 14 | 14 | 0.689534 | 0.586369 | 0.702148 | 0.571798 |
| 15 | 15 | 0.683122 | 0.591968 | 0.708008 | 0.565826 |
| 16 | 16 | 0.708409 | 0.569003 | 0.704590 | 0.567496 |
| 17 | 17 | 0.709619 | 0.559664 | 0.702148 | 0.564764 |
| 18 | 18 | 0.722807 | 0.546677 | 0.728027 | 0.538946 |
| 19 | 19 | 0.737326 | 0.533014 | 0.714844 | 0.564228 |
| 20 | 20 | 0.738052 | 0.524917 | 0.742676 | 0.510024 |
| 21 | 21 | 0.747368 | 0.512303 | 0.728516 | 0.527073 |
| 22 | 22 | 0.755233 | 0.503846 | 0.754395 | 0.495352 |
| 23 | 23 | 0.753660 | 0.503300 | 0.758789 | 0.492701 |
| 24 | 24 | 0.762008 | 0.487846 | 0.759766 | 0.481956 |
For epoch 25 (24 in the dataframe), we can see that the Adam optimizer gives us a training accuracy of 0.762, a training loss of 0.487, a validation accuracy of 0.759, and a validation loss of 0.481. Each epoch took about 31 s (120 ms/step).
CNN Adam Optimizer Highest Scores from DataFrame
max_acc = df['accuracy'].max()
max_loss = df['loss'].max()
train_acc = round(max_acc, 3)
train_loss = round(max_loss, 3)
print('The highest train loss of CNN Model (Adam optimizer) is:', train_loss)
print('The highest train accuracy of CNN Model (Adam optimizer) is:', train_acc)
The highest train loss of CNN Model (Adam optimizer) is: 0.69 The highest train accuracy of CNN Model (Adam optimizer) is: 0.762
max_val_acc = df['val_accuracy'].max()
max_val_loss = df['val_loss'].max()
val_acc = round(max_val_acc, 3)
val_loss = round(max_val_loss, 3)
print('The highest validation loss of CNN Model (Adam optimizer) is:', val_loss)
print('The highest validation accuracy of CNN Model (Adam optimizer) is:', val_acc)
The highest validation loss of CNN Model (Adam optimizer) is: 0.69 The highest validation accuracy of CNN Model (Adam optimizer) is: 0.76
For testing, we used two different approaches: model.evaluate_generator (based on the sources provided) and a slightly modified version of the tutorial's approach.
# https://stackoverflow.com/questions/63684459/should-i-use-evaluate-generator-or-evaluate-to-evaluate-my-cnn-model
# https://www.tensorflow.org/guide/keras/train_and_evaluate
# evaluate_generator is part of keras module in TensorFlow
test_loss, test_accuracy = model.evaluate_generator(test_generator, steps=len(X_test) // batch)
tst_acc = round(test_accuracy,3)
tst_loss = round(test_loss,3)
print('The test loss for CNN Model is:', tst_loss)
print('The test accuracy for CNN Model is:', tst_acc)
The test loss for CNN Model is: 0.487 The test accuracy for CNN Model is: 0.76
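Note that evaluate_generator is deprecated in recent TensorFlow releases; model.evaluate accepts the same generator directly, so an equivalent call (a sketch using the same variables as above) would be:
# Equivalent, non-deprecated evaluation call; model.evaluate accepts the generator directly
test_loss, test_accuracy = model.evaluate(test_generator, steps=len(X_test) // batch)
print('The test loss for CNN Model is:', round(test_loss, 3))
print('The test accuracy for CNN Model is:', round(test_accuracy, 3))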
Just like training and validation, our test follows the same format as the tutorial, with a few changes to fit our data and to print out only 6 cat or dog images. As in Sklearn, we use model.predict(X_test) to test our model on a new, unseen set of images. The predicted probabilities from model.predict() are stored in a CSV file containing image names and probabilities side by side. In our case, we only want 6 images for visualization purposes, so the results CSV file contains only those 6 images along with their respective probabilities.
TEST_SIZE = len(X_test)
TEST_FILE = 'Adamtest_results.csv'
probabilities = model.predict(X_test)
with open(TEST_FILE, 'w', newline='') as csvfile:
fieldnames = ['filename', 'probability']
writer = csv.DictWriter(csvfile, fieldnames=fieldnames)
writer.writeheader()
# randomly select 6 images
indices = random.sample(range(TEST_SIZE), 6)
fig, axes = plt.subplots(nrows=2, ncols=3, figsize=(12, 8))
ax = axes.ravel()
for i, index in enumerate(indices):
filename = f"image{index}.jpg"
probability = probabilities[index]
if probability[0] > 0.5:
result = "%.2f" % (probability[0]*100) + "% dog"
else:
result = "%.2f" % ((1-probability[0])*100) + "% cat"
writer.writerow({'filename': filename, 'probability': result})
# plot the image with predicted probability
ax[i].imshow(X_test[index])
ax[i].set_title(result)
ax[i].axis('off')
plt.tight_layout()
plt.show()
82/82 [==============================] - 1s 14ms/step
test_path = '/content/Adamtest_results.csv'
df = pd.read_csv(test_path)
df
| filename | probability | |
|---|---|---|
| 0 | image1905.jpg | 100.00% cat |
| 1 | image2151.jpg | 100.00% cat |
| 2 | image1408.jpg | 100.00% cat |
| 3 | image1113.jpg | 100.00% cat |
| 4 | image309.jpg | 100.00% cat |
| 5 | image1963.jpg | 100.00% dog |
Changing the optimizer matters: RMSprop gave us worse predictions compared to Adam. As we all know, Adam has been the go-to optimizer throughout this whole project. Although Adam gave better results than RMSprop, our CNN model still needs more tuning to work 100% correctly (if that is even possible). Even though some of the above predictions won't match, our new CNN (using the Adam optimizer) still predicts images far better than the PyTorch MLP classification model from Phase 3.
y_pred = model.predict(X_test)
y_pred_classes = np.argmax(y_pred, axis=1)
target_names = ['cat', 'dog']
print(classification_report(np.argmax(y_test, axis=1), y_pred_classes, target_names=target_names))
82/82 [==============================] - 1s 11ms/step
precision recall f1-score support
cat 0.82 0.29 0.43 1209
dog 0.60 0.94 0.74 1385
accuracy 0.64 2594
macro avg 0.71 0.62 0.58 2594
weighted avg 0.70 0.64 0.59 2594
If we look more closely at the classification report, we can see that the metrics indicate our model has an easier time identifying dog images than cat images. We had the same scenario back in Phase 3, where the model struggled to correctly classify cat images. However, if we compare the Keras CNN model with the PyTorch MLP model, we can see a significant increase in performance. Just by looking at the classification report, we can see that this model performs far better than the MLP model from Phase 3. It is a major improvement.
PyTorch MLP Model Classification Report
# Confusion matrix of the true test labels vs the predicted classes
y_true_classes = np.argmax(y_test, axis=1)
cm = confusion_matrix(y_true_classes, y_pred_classes)
disp = ConfusionMatrixDisplay(confusion_matrix=cm, display_labels=target_names)
disp.plot()
plt.show()
Again, we can see that our CNN Model (using the Adam optimizer) is performing far better than the PyTorch MLP. The confusion matrix shows how the model performs at classifying both classes; consistent with the classification report, dogs are recognized more reliably than cats. We know there is more tuning to do, but it is still a major improvement since Phase 3.
Take a look at Phase 3 PyTorch MLP Model Confusion Matrix:
You can search through TensorBoard for any graph or other relevant information regarding the CNN Model's behavior with this optimizer.
adamdf = pd.read_csv('/content/Adamtraining_logs.csv', delimiter=';')
ADAM_ACC = round(adamdf['accuracy'].max(), 3)
ADAM_LOSS = round(adamdf['loss'].max(), 3)
ADAM_VAL_ACC = round(adamdf['val_accuracy'].max(), 3)
ADAM_VAL_LOSS = round(adamdf['val_loss'].max(), 3)
ADAM = pd.DataFrame({'Adam Train Accuracy': [ADAM_ACC], 'Adam Train loss': [ADAM_LOSS], 'Adam Validation Accuracy': [ADAM_VAL_ACC], 'Adam Validation Loss': [ADAM_VAL_LOSS]})
ADAM
| Adam Train Accuracy | Adam Train loss | Adam Validation Accuracy | Adam Validation Loss | |
|---|---|---|---|---|
| 0 | 0.762 | 0.69 | 0.76 | 0.69 |
Adam Training loss vs Validation loss TensorBoard
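The same curves can also be reproduced outside TensorBoard, directly from the CSV training log written by CSVLogger. A minimal matplotlib sketch (pandas and matplotlib are already imported earlier in this notebook; the column names follow the log shown above):
# Plot training vs validation loss from the CSVLogger output
adam_log = pd.read_csv('/content/Adamtraining_logs.csv', delimiter=';')
plt.figure(figsize=(8, 5))
plt.plot(adam_log['epoch'], adam_log['loss'], label='Training loss')
plt.plot(adam_log['epoch'], adam_log['val_loss'], label='Validation loss')
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.title('CNN (Adam optimizer): training vs validation loss')
plt.legend()
plt.show()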
rmsdf = pd.read_csv('/content/RMStraining_logs.csv', delimiter=';')
RMS_ACC = round(rmsdf['accuracy'].max(), 3)
RMS_LOSS = round(rmsdf['loss'].max(), 3)
RMS_VAL_ACC = round(rmsdf['val_accuracy'].max(), 3)
RMS_VAL_LOSS = round(rmsdf['val_loss'].max(), 3)
RMS = pd.DataFrame({'RMSprop Train Accuracy': [RMS_ACC], 'RMS Train loss': [RMS_LOSS], 'RMS Validation Accuracy': [RMS_VAL_ACC], 'RMS Validation Loss': [RMS_VAL_LOSS]})
RMS
| RMSprop Train Accuracy | RMS Train loss | RMS Validation Accuracy | RMS Validation Loss | |
|---|---|---|---|---|
| 0 | 0.705 | 0.691 | 0.713 | 0.692 |
Test Accuracy and Validation Loss for RMSprop Optimizer
RMSprop Training loss vs Validation loss TensorBoard
As we have mentioned before, the scores and graphs show that the Adam optimizer has consistently given the better results overall, at least in our experience. Adam returns 0.762 for train accuracy and 0.76 for validation accuracy, while RMSprop returns 0.705 for train accuracy and 0.713 for validation accuracy. Adam also returns the higher test accuracy of 0.76 versus RMSprop's test accuracy of 0.695.
We also explored using the entire Kaggle Cat and Dog competition dataset to generate even better results. After numerous attempts we finally managed to load the entire Kaggle dataset into our Google Drive and then unzip the files. We then created cat and dog subdirectories inside the train, validation, and test1 folders so we could work with the full data via Keras .flow_from_directory() (required for this ImageDataGenerator step). However, we were unable to even train the model because of the amount of resources the training needed per epoch: of the 100% of computing units Google Colab Pro provides, almost 30% was consumed within the first 25 minutes of the first epoch. We ended up dropping the idea and continued using the Sklearn method from the HWs.
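For reference, this is roughly what the .flow_from_directory() setup looked like; a minimal sketch with placeholder paths (the actual Drive directories are not reproduced here), assuming one subdirectory per class:
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Hypothetical layout on Google Drive (placeholder paths), one subfolder per class:
#   .../train/cats, .../train/dogs and .../validation/cats, .../validation/dogs
full_train_dir = '/content/drive/MyDrive/kaggle_full/train'       # placeholder
full_valid_dir = '/content/drive/MyDrive/kaggle_full/validation'  # placeholder

full_train_gen = ImageDataGenerator(rescale=1./255).flow_from_directory(
    full_train_dir,
    target_size=(128, 128),   # same image size used elsewhere in this notebook
    batch_size=32,
    class_mode='categorical')
full_valid_gen = ImageDataGenerator(rescale=1./255).flow_from_directory(
    full_valid_dir,
    target_size=(128, 128),
    batch_size=32,
    class_mode='categorical')
# These generators would then be passed to model.fit in place of the in-memory arrays.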
Fully Convolutional Networks (FCNs) are similar to convolutional neural networks (CNNs), but an FCN does not use dense layers (which receive input from all of the neurons in the previous layer) and is instead composed of convolutional layers end-to-end. The FCN used in this project is based on this tutorial and consists of 30 layers in total, a combination of several 2D convolution, dropout, normalization, and activation layers. The very last layer is a softmax activation, which is used for the object classification.
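The main structural difference is the classification head: instead of Flatten plus Dense layers, the FCN ends with a 1x1 convolution and global pooling (the full 30-layer model is defined further below). A minimal sketch of just that head, for illustration:
import tensorflow as tf

# A "dense-free" classification head: a 1x1 convolution produces per-location
# class scores, global pooling collapses the spatial dimensions, softmax normalizes.
features = tf.keras.layers.Input(shape=(None, None, 512))            # feature map of any spatial size
scores = tf.keras.layers.Conv2D(filters=2, kernel_size=1)(features)  # 2 classes: cat, dog
pooled = tf.keras.layers.GlobalMaxPooling2D()(scores)
probs = tf.keras.layers.Activation('softmax')(pooled)
fcn_head = tf.keras.Model(inputs=features, outputs=probs)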
I ran several experiments with different batch sizes, numbers of epochs, optimizers (Adam and SGD), and dropout rates in the model. Based on those experiments, the examples below use Adam with a batch size of 25, and Adam and SGD with a batch size of 50, which yielded the best results. For the batch size of 50, I needed to use a Google Colab runtime with High-RAM and a GPU.
Resources:
Model representation from TensorBoard.
This section loads the data and prepares it for the FCN model.
This section loads all necessary Python libraries and connects the Colab environment to the data located in a shared Google Drive directory.
import pandas as pd
import numpy as np
import csv
import random
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from sklearn.metrics import classification_report
from sklearn.metrics import confusion_matrix, ConfusionMatrixDisplay
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.utils import to_categorical
from tensorflow.keras.utils import plot_model
from keras.callbacks import CSVLogger
from google.colab import auth
from google.auth import default
from google.colab import drive
'''
Mounting Google Drive to the current Colab session
How to Connect Google Colab with Google Drive:
https://www.marktechpost.com/2019/06/07/how-to-connect-google-colab-with-google-drive/
'''
drive.mount('/content/drive', force_remount=True)
Mounted at /content/drive
Loading the dataset from the previous project Phases. Each image is stored as a flattened 128x128x3 NumPy array and is reshaped back to 128x128x3 below.
colab_path = '/content/drive/MyDrive/aml/aml/data/martin'
X = np.load(colab_path + '/img.npy', allow_pickle=True)
y_label = np.load(colab_path + '/y_label.npy', allow_pickle=True)
y_bbox = np.load(colab_path + '/y_bbox.npy', allow_pickle=True)
Splitting the dataset into 80% for training and 20% for testing, and then further splitting the training portion into 80% for training and 20% for validation.
# Testing
X_train, X_test, y_train, y_test_label = train_test_split(X, y_label, test_size=0.2, random_state=42)
# Training and Validation
X_train, X_valid, y_train, y_valid = train_test_split(X_train, y_train, test_size=0.2, random_state=42)
print('Training size:', X_train.shape)
print('Validation size:', X_valid.shape)
print('Testing size:', X_test.shape)
Training size: (8297, 49152) Validation size: (2075, 49152) Testing size: (2594, 49152)
Reshaping datasets for further augmentation.
X_train = X_train.reshape((-1, 128, 128, 3))
X_valid = X_valid.reshape((-1, 128, 128, 3))
X_test = X_test.reshape((-1, 128, 128, 3))
num_classes = len(set(y_label))
y_train = to_categorical(y_train, num_classes)
y_valid = to_categorical(y_valid, num_classes)
y_test = to_categorical(y_test_label, num_classes)
This step augments the images in various ways (rotation, shifting width, shifting height, zoom, horizontal flip, etc.). The purpose is to provide larger input data variety to the model.
#https://www.tensorflow.org/api_docs/python/tf/keras/preprocessing/image/ImageDataGenerator
train_generator = ImageDataGenerator(
rescale=1./255,
rotation_range=30,
width_shift_range=0.1,
height_shift_range=0.1,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True,
fill_mode='nearest')
# Not augmenting validation and testing images
valid_generator = ImageDataGenerator(rescale=1./255)
test_generator = ImageDataGenerator(rescale=1./255)
train_generator.fit(X_train)
valid_generator.fit(X_valid)
test_generator.fit(X_test)
Creating the generators with a batch size of 25.
batch = 25
train_generator = train_generator.flow(X_train, y_train, batch_size=batch)
valid_generator = valid_generator.flow(X_valid, y_valid, batch_size=batch)
test_generator = test_generator.flow(X_test, y_test, batch_size=batch)
TensorBoard is one of several ways we can visualize a model's progress, either in real time or after the fact. In this project, TensorBoard shows both training and validation in real time.
Since each training task initializes a different TensorBoard callback, multiple training and validation graphs can be visualized in one instance of TensorBoard.
# Clear any logs from previous runs
!rm -rf logs/
import tensorflow as tf
import datetime
!pip install -q -U tensorboard
%load_ext tensorboard
log_dir = "logs/fit/" + datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir=log_dir, histogram_freq=1)
%tensorboard --logdir logs/fit
FCN model is based on this tutorial.
import tensorflow as tf
def FCN_model(len_classes=2, dropout_rate=0.2):
input = tf.keras.layers.Input(shape=(None, None, 3))
x = tf.keras.layers.Conv2D(filters=32, kernel_size=3, strides=1)(input)
x = tf.keras.layers.Dropout(dropout_rate)(x)
x = tf.keras.layers.BatchNormalization()(x)
x = tf.keras.layers.Activation('relu')(x)
# x = tf.keras.layers.MaxPooling2D()(x)
x = tf.keras.layers.Conv2D(filters=64, kernel_size=3, strides=1)(x)
x = tf.keras.layers.Dropout(dropout_rate)(x)
x = tf.keras.layers.BatchNormalization()(x)
x = tf.keras.layers.Activation('relu')(x)
# x = tf.keras.layers.MaxPooling2D()(x)
x = tf.keras.layers.Conv2D(filters=128, kernel_size=3, strides=2)(x)
x = tf.keras.layers.Dropout(dropout_rate)(x)
x = tf.keras.layers.BatchNormalization()(x)
x = tf.keras.layers.Activation('relu')(x)
# x = tf.keras.layers.MaxPooling2D()(x)
x = tf.keras.layers.Conv2D(filters=256, kernel_size=3, strides=2)(x)
x = tf.keras.layers.Dropout(dropout_rate)(x)
x = tf.keras.layers.BatchNormalization()(x)
x = tf.keras.layers.Activation('relu')(x)
# x = tf.keras.layers.MaxPooling2D()(x)
x = tf.keras.layers.Conv2D(filters=512, kernel_size=3, strides=2)(x)
x = tf.keras.layers.Dropout(dropout_rate)(x)
x = tf.keras.layers.BatchNormalization()(x)
x = tf.keras.layers.Activation('relu')(x)
# Uncomment the below line if you're using dense layers
# x = tf.keras.layers.GlobalMaxPooling2D()(x)
# Fully connected layer 1
# x = tf.keras.layers.Dropout(dropout_rate)(x)
# x = tf.keras.layers.BatchNormalization()(x)
# x = tf.keras.layers.Dense(units=64)(x)
# x = tf.keras.layers.Activation('relu')(x)
# Fully connected layer 1
x = tf.keras.layers.Conv2D(filters=64, kernel_size=1, strides=1)(x)
x = tf.keras.layers.Dropout(dropout_rate)(x)
x = tf.keras.layers.BatchNormalization()(x)
x = tf.keras.layers.Activation('relu')(x)
# Fully connected layer 2
# x = tf.keras.layers.Dropout(dropout_rate)(x)
# x = tf.keras.layers.BatchNormalization()(x)
# x = tf.keras.layers.Dense(units=len_classes)(x)
# predictions = tf.keras.layers.Activation('softmax')(x)
# Fully connected layer 2
x = tf.keras.layers.Conv2D(filters=len_classes, kernel_size=1, strides=1)(x)
x = tf.keras.layers.Dropout(dropout_rate)(x)
x = tf.keras.layers.BatchNormalization()(x)
x = tf.keras.layers.GlobalMaxPooling2D()(x)
predictions = tf.keras.layers.Activation('softmax')(x)
model = tf.keras.Model(inputs=input, outputs=predictions)
return model
model = FCN_model(len_classes=2, dropout_rate=0.2)
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.0001),
loss='binary_crossentropy',
metrics=['accuracy'])
print(model.summary())
print(f'Total number of layers: {len(model.layers)}')
Model: "model"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_1 (InputLayer) [(None, None, None, 3)] 0
conv2d (Conv2D) (None, None, None, 32) 896
dropout (Dropout) (None, None, None, 32) 0
batch_normalization (BatchN (None, None, None, 32) 128
ormalization)
activation (Activation) (None, None, None, 32) 0
conv2d_1 (Conv2D) (None, None, None, 64) 18496
dropout_1 (Dropout) (None, None, None, 64) 0
batch_normalization_1 (Batc (None, None, None, 64) 256
hNormalization)
activation_1 (Activation) (None, None, None, 64) 0
conv2d_2 (Conv2D) (None, None, None, 128) 73856
dropout_2 (Dropout) (None, None, None, 128) 0
batch_normalization_2 (Batc (None, None, None, 128) 512
hNormalization)
activation_2 (Activation) (None, None, None, 128) 0
conv2d_3 (Conv2D) (None, None, None, 256) 295168
dropout_3 (Dropout) (None, None, None, 256) 0
batch_normalization_3 (Batc (None, None, None, 256) 1024
hNormalization)
activation_3 (Activation) (None, None, None, 256) 0
conv2d_4 (Conv2D) (None, None, None, 512) 1180160
dropout_4 (Dropout) (None, None, None, 512) 0
batch_normalization_4 (Batc (None, None, None, 512) 2048
hNormalization)
activation_4 (Activation) (None, None, None, 512) 0
conv2d_5 (Conv2D) (None, None, None, 64) 32832
dropout_5 (Dropout) (None, None, None, 64) 0
batch_normalization_5 (Batc (None, None, None, 64) 256
hNormalization)
activation_5 (Activation) (None, None, None, 64) 0
conv2d_6 (Conv2D) (None, None, None, 2) 130
dropout_6 (Dropout) (None, None, None, 2) 0
batch_normalization_6 (Batc (None, None, None, 2) 8
hNormalization)
global_max_pooling2d (Globa (None, 2) 0
lMaxPooling2D)
activation_6 (Activation) (None, 2) 0
=================================================================
Total params: 1,605,770
Trainable params: 1,603,654
Non-trainable params: 2,116
_________________________________________________________________
None
Total number of layers: 30
# https://machinelearningmastery.com/visualize-deep-learning-neural-network-model-keras/
plot_model(model, to_file='model_Adam.png', show_shapes=True)
epochs = 50
csv_log = 'training_adam_25_logs.csv'
history = model.fit_generator(
train_generator,
steps_per_epoch=len(X_train) // batch,
epochs=epochs,
validation_data=valid_generator,
validation_steps=len(X_valid) // batch,
callbacks=[CSVLogger(csv_log, append=False, separator=";"), tensorboard_callback]
)
<ipython-input-17-6a008910a0d7>:5: UserWarning: `Model.fit_generator` is deprecated and will be removed in a future version. Please use `Model.fit`, which supports generators. history = model.fit_generator(
Epoch 1/50 331/331 [==============================] - 56s 116ms/step - loss: 0.7610 - accuracy: 0.5005 - val_loss: 0.7101 - val_accuracy: 0.4988 Epoch 2/50 331/331 [==============================] - 37s 112ms/step - loss: 0.7120 - accuracy: 0.5203 - val_loss: 0.7035 - val_accuracy: 0.5330 Epoch 3/50 331/331 [==============================] - 38s 113ms/step - loss: 0.7046 - accuracy: 0.5384 - val_loss: 0.6996 - val_accuracy: 0.5316 Epoch 4/50 331/331 [==============================] - 38s 115ms/step - loss: 0.7007 - accuracy: 0.5274 - val_loss: 0.6973 - val_accuracy: 0.5316 Epoch 5/50 331/331 [==============================] - 38s 115ms/step - loss: 0.6980 - accuracy: 0.5255 - val_loss: 0.6960 - val_accuracy: 0.5316 Epoch 6/50 331/331 [==============================] - 38s 115ms/step - loss: 0.6964 - accuracy: 0.5273 - val_loss: 0.6949 - val_accuracy: 0.5316 Epoch 7/50 331/331 [==============================] - 38s 116ms/step - loss: 0.6950 - accuracy: 0.5261 - val_loss: 0.6940 - val_accuracy: 0.5316 Epoch 8/50 331/331 [==============================] - 39s 117ms/step - loss: 0.6942 - accuracy: 0.5255 - val_loss: 0.6934 - val_accuracy: 0.5316 Epoch 9/50 331/331 [==============================] - 38s 116ms/step - loss: 0.6933 - accuracy: 0.5266 - val_loss: 0.6929 - val_accuracy: 0.5316 Epoch 10/50 331/331 [==============================] - 39s 116ms/step - loss: 0.6924 - accuracy: 0.5261 - val_loss: 0.6926 - val_accuracy: 0.5316 Epoch 11/50 331/331 [==============================] - 39s 118ms/step - loss: 0.6915 - accuracy: 0.5267 - val_loss: 0.6920 - val_accuracy: 0.5316 Epoch 12/50 331/331 [==============================] - 39s 117ms/step - loss: 0.6912 - accuracy: 0.5268 - val_loss: 0.6915 - val_accuracy: 0.5316 Epoch 13/50 331/331 [==============================] - 39s 117ms/step - loss: 0.6907 - accuracy: 0.5265 - val_loss: 0.6911 - val_accuracy: 0.5316 Epoch 14/50 331/331 [==============================] - 39s 117ms/step - loss: 0.6900 - accuracy: 0.5317 - val_loss: 0.6909 - val_accuracy: 0.5316 Epoch 15/50 331/331 [==============================] - 38s 116ms/step - loss: 0.6892 - accuracy: 0.5366 - val_loss: 0.6902 - val_accuracy: 0.5316 Epoch 16/50 331/331 [==============================] - 39s 117ms/step - loss: 0.6890 - accuracy: 0.5389 - val_loss: 0.6900 - val_accuracy: 0.5316 Epoch 17/50 331/331 [==============================] - 39s 117ms/step - loss: 0.6885 - accuracy: 0.5465 - val_loss: 0.6893 - val_accuracy: 0.5398 Epoch 18/50 331/331 [==============================] - 39s 117ms/step - loss: 0.6881 - accuracy: 0.5430 - val_loss: 0.6897 - val_accuracy: 0.5292 Epoch 19/50 331/331 [==============================] - 39s 117ms/step - loss: 0.6879 - accuracy: 0.5505 - val_loss: 0.6901 - val_accuracy: 0.5316 Epoch 20/50 331/331 [==============================] - 38s 116ms/step - loss: 0.6876 - accuracy: 0.5457 - val_loss: 0.6882 - val_accuracy: 0.5316 Epoch 21/50 331/331 [==============================] - 38s 116ms/step - loss: 0.6872 - accuracy: 0.5571 - val_loss: 0.6892 - val_accuracy: 0.5316 Epoch 22/50 331/331 [==============================] - 38s 116ms/step - loss: 0.6866 - accuracy: 0.5580 - val_loss: 0.6887 - val_accuracy: 0.5311 Epoch 23/50 331/331 [==============================] - 38s 116ms/step - loss: 0.6859 - accuracy: 0.5539 - val_loss: 0.6882 - val_accuracy: 0.5316 Epoch 24/50 331/331 [==============================] - 38s 116ms/step - loss: 0.6849 - accuracy: 0.5586 - val_loss: 0.6861 - val_accuracy: 0.5388 Epoch 25/50 331/331 [==============================] - 39s 
117ms/step - loss: 0.6847 - accuracy: 0.5699 - val_loss: 0.6846 - val_accuracy: 0.5561 Epoch 26/50 331/331 [==============================] - 38s 115ms/step - loss: 0.6846 - accuracy: 0.5602 - val_loss: 0.6830 - val_accuracy: 0.5455 Epoch 27/50 331/331 [==============================] - 38s 115ms/step - loss: 0.6828 - accuracy: 0.5666 - val_loss: 0.6855 - val_accuracy: 0.5354 Epoch 28/50 331/331 [==============================] - 38s 116ms/step - loss: 0.6822 - accuracy: 0.5588 - val_loss: 0.6819 - val_accuracy: 0.5320 Epoch 29/50 331/331 [==============================] - 38s 116ms/step - loss: 0.6823 - accuracy: 0.5691 - val_loss: 0.6836 - val_accuracy: 0.5340 Epoch 30/50 331/331 [==============================] - 38s 116ms/step - loss: 0.6813 - accuracy: 0.5653 - val_loss: 0.6827 - val_accuracy: 0.5398 Epoch 31/50 331/331 [==============================] - 38s 116ms/step - loss: 0.6824 - accuracy: 0.5746 - val_loss: 0.6821 - val_accuracy: 0.5316 Epoch 32/50 331/331 [==============================] - 39s 118ms/step - loss: 0.6805 - accuracy: 0.5771 - val_loss: 0.6821 - val_accuracy: 0.5610 Epoch 33/50 331/331 [==============================] - 38s 116ms/step - loss: 0.6808 - accuracy: 0.5740 - val_loss: 0.6805 - val_accuracy: 0.5528 Epoch 34/50 331/331 [==============================] - 39s 116ms/step - loss: 0.6790 - accuracy: 0.5783 - val_loss: 0.6787 - val_accuracy: 0.5576 Epoch 35/50 331/331 [==============================] - 38s 116ms/step - loss: 0.6783 - accuracy: 0.5833 - val_loss: 0.6813 - val_accuracy: 0.5311 Epoch 36/50 331/331 [==============================] - 38s 116ms/step - loss: 0.6772 - accuracy: 0.5834 - val_loss: 0.6847 - val_accuracy: 0.5335 Epoch 37/50 331/331 [==============================] - 38s 115ms/step - loss: 0.6776 - accuracy: 0.5893 - val_loss: 0.6791 - val_accuracy: 0.5523 Epoch 38/50 331/331 [==============================] - 38s 115ms/step - loss: 0.6780 - accuracy: 0.5843 - val_loss: 0.6789 - val_accuracy: 0.5614 Epoch 39/50 331/331 [==============================] - 39s 118ms/step - loss: 0.6756 - accuracy: 0.5914 - val_loss: 0.6784 - val_accuracy: 0.5325 Epoch 40/50 331/331 [==============================] - 38s 115ms/step - loss: 0.6767 - accuracy: 0.5909 - val_loss: 0.6815 - val_accuracy: 0.5441 Epoch 41/50 331/331 [==============================] - 38s 115ms/step - loss: 0.6749 - accuracy: 0.6022 - val_loss: 0.6774 - val_accuracy: 0.5508 Epoch 42/50 331/331 [==============================] - 38s 115ms/step - loss: 0.6745 - accuracy: 0.5961 - val_loss: 0.6772 - val_accuracy: 0.5605 Epoch 43/50 331/331 [==============================] - 38s 115ms/step - loss: 0.6744 - accuracy: 0.6022 - val_loss: 0.6762 - val_accuracy: 0.5769 Epoch 44/50 331/331 [==============================] - 38s 115ms/step - loss: 0.6743 - accuracy: 0.6063 - val_loss: 0.6762 - val_accuracy: 0.6164 Epoch 45/50 331/331 [==============================] - 43s 128ms/step - loss: 0.6716 - accuracy: 0.6133 - val_loss: 0.6742 - val_accuracy: 0.5942 Epoch 46/50 331/331 [==============================] - 39s 117ms/step - loss: 0.6731 - accuracy: 0.6134 - val_loss: 0.6788 - val_accuracy: 0.5677 Epoch 47/50 331/331 [==============================] - 43s 130ms/step - loss: 0.6711 - accuracy: 0.6221 - val_loss: 0.6810 - val_accuracy: 0.5571 Epoch 48/50 331/331 [==============================] - 39s 118ms/step - loss: 0.6723 - accuracy: 0.6106 - val_loss: 0.6737 - val_accuracy: 0.6173 Epoch 49/50 331/331 [==============================] - 39s 118ms/step - loss: 0.6706 - accuracy: 0.6181 - 
val_loss: 0.6746 - val_accuracy: 0.6275 Epoch 50/50 331/331 [==============================] - 39s 118ms/step - loss: 0.6709 - accuracy: 0.6203 - val_loss: 0.6748 - val_accuracy: 0.5894
# https://stackoverflow.com/questions/63684459/should-i-use-evaluate-generator-or-evaluate-to-evaluate-my-cnn-model
# https://www.tensorflow.org/guide/keras/train_and_evaluate
# evaluate_generator is part of keras module in TensorFlow
model1_test_loss, model1_test_accuracy = model.evaluate_generator(test_generator, steps=len(X_test) // batch)
print()
print('Test accuracy (FCN, batch size 25, Adam optimizer) = ', round(model1_test_accuracy,3))
print('Test loss (FCN, batch size 25, Adam optimizer) = ', round(model1_test_loss,3))
<ipython-input-19-f0989dcf1f0e>:5: UserWarning: `Model.evaluate_generator` is deprecated and will be removed in a future version. Please use `Model.evaluate`, which supports generators. model1_test_loss, model1_test_accuracy = model.evaluate_generator(test_generator, steps=len(X_test) // batch)
Test accuracy (FCN, batch size 25, Adam optimizer) = 0.592 Test loss (FCN, batch size 25, Adam optimizer) = 0.676
df = pd.read_csv(csv_log, delimiter=';')
# Printing out only the epoch with the highest accuracy
fcn01 = df[df['accuracy'] == df['accuracy'].max()]
display(fcn01)
| epoch | accuracy | loss | val_accuracy | val_loss | |
|---|---|---|---|---|---|
| 46 | 46 | 0.622099 | 0.671054 | 0.557108 | 0.680974 |
fcn01_numpy = fcn01.to_numpy()
model1_data = np.array([['FCN (Adam, batch 25)', fcn01_numpy[0][0],
round(fcn01_numpy[0][1], 3), round(fcn01_numpy[0][3], 3), round(model1_test_accuracy, 3),
round(fcn01_numpy[0][2], 3), round(fcn01_numpy[0][4], 3), round(model1_test_loss, 3)]])
fcn_results = pd.DataFrame(data = model1_data, columns = ['Model', 'Epoch', 'Train Accuracy', 'Valid Accuracy', 'Test Accuracy', 'Train Loss', 'Valid Loss', 'Test Loss'])
display(fcn_results)
| Model | Epoch | Train Accuracy | Valid Accuracy | Test Accuracy | Train Loss | Valid Loss | Test Loss | |
|---|---|---|---|---|---|---|---|---|
| 0 | FCN (Adam, batch 25) | 46.0 | 0.622 | 0.557 | 0.592 | 0.671 | 0.681 | 0.676 |
TEST_SIZE = len(X_test)
TEST_FILE = 'testing_adam_25_results.csv'
probabilities = model.predict(X_test)
with open(TEST_FILE, 'w', newline='') as csvfile:
fieldnames = ['filename', 'probability']
writer = csv.DictWriter(csvfile, fieldnames=fieldnames)
writer.writeheader()
# randomly select 6 images
indices = random.sample(range(TEST_SIZE), 6)
fig, axes = plt.subplots(nrows=2, ncols=3, figsize=(12, 8))
ax = axes.ravel()
for i, index in enumerate(indices):
filename = f"image{index}.jpg"
probability = probabilities[index]
if probability[0] > 0.5:
result = "%.2f" % (probability[0]*100) + "% dog"
else:
result = "%.2f" % ((1-probability[0])*100) + "% cat"
writer.writerow({'filename': filename, 'probability': result})
# plot the image with predicted probability
ax[i].imshow(X_test[index])
ax[i].set_title(result)
ax[i].axis('off')
plt.tight_layout()
plt.show()
82/82 [==============================] - 2s 24ms/step
y_pred = model.predict(X_test)
y_pred_classes = np.argmax(y_pred, axis=1)
# Generate classification report
target_names = ['cat', 'dog']
print(classification_report(np.argmax(y_test, axis=1), y_pred_classes, target_names=target_names))
82/82 [==============================] - 3s 25ms/step
precision recall f1-score support
cat 0.48 0.72 0.57 1209
dog 0.56 0.31 0.40 1385
accuracy 0.50 2594
macro avg 0.52 0.52 0.49 2594
weighted avg 0.52 0.50 0.48 2594
This model uses the same FCN architecture with a learning rate of 0.0001.
The data is loaded and prepared again because the batch size is doubled; the steps are the same as described in section 4.1.
colab_path = '/content/drive/MyDrive/aml/aml/data/martin'
X = np.load(colab_path + '/img.npy', allow_pickle=True)
y_label = np.load(colab_path + '/y_label.npy', allow_pickle=True)
y_bbox = np.load(colab_path + '/y_bbox.npy', allow_pickle=True)
# Testing
X_train, X_test, y_train, y_test_label = train_test_split(X, y_label, test_size=0.2, random_state=42)
# Training and Validation
X_train, X_valid, y_train, y_valid = train_test_split(X_train, y_train, test_size=0.2, random_state=42)
X_train = X_train.reshape((-1, 128, 128, 3))
X_valid = X_valid.reshape((-1, 128, 128, 3))
X_test = X_test.reshape((-1, 128, 128, 3))
num_classes = len(set(y_label))
y_train = to_categorical(y_train, num_classes)
y_valid = to_categorical(y_valid, num_classes)
y_test = to_categorical(y_test_label, num_classes)
#https://www.tensorflow.org/api_docs/python/tf/keras/preprocessing/image/ImageDataGenerator
train_generator = ImageDataGenerator(
rescale=1./255,
rotation_range=30,
width_shift_range=0.1,
height_shift_range=0.1,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True,
fill_mode='nearest')
valid_generator = ImageDataGenerator(rescale=1./255)
test_generator = ImageDataGenerator(rescale=1./255)
train_generator.fit(X_train)
valid_generator.fit(X_valid)
test_generator.fit(X_test)
The batch size of 50 is the only difference compared to the section 4.1 Preparation.
batch = 50
train_generator = train_generator.flow(X_train, y_train, batch_size=batch)
valid_generator = valid_generator.flow(X_valid, y_valid, batch_size=batch)
test_generator = test_generator.flow(X_test, y_test, batch_size=batch)
model2 = FCN_model(len_classes=2, dropout_rate=0.2)
model2.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.0001),
loss='binary_crossentropy',
metrics=['accuracy'])
log_dir = "logs/fit/" + datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir=log_dir, histogram_freq=1)
epochs = 50
csv_log = 'training_sgd_50_logs.csv'
history = model2.fit_generator(
train_generator,
steps_per_epoch=len(X_train) // batch,
epochs=epochs,
validation_data=valid_generator,
validation_steps=len(X_valid) // batch,
callbacks=[CSVLogger(csv_log, append=False, separator=";"), tensorboard_callback]
)
<ipython-input-45-66864dc74f05>:5: UserWarning: `Model.fit_generator` is deprecated and will be removed in a future version. Please use `Model.fit`, which supports generators. history = model2.fit_generator(
Epoch 1/50 165/165 [==============================] - 44s 234ms/step - loss: 1.6133 - accuracy: 0.4824 - val_loss: 0.7025 - val_accuracy: 0.5332 Epoch 2/50 165/165 [==============================] - 36s 215ms/step - loss: 1.5000 - accuracy: 0.4731 - val_loss: 0.7261 - val_accuracy: 0.5312 Epoch 3/50 165/165 [==============================] - 36s 217ms/step - loss: 1.4254 - accuracy: 0.4721 - val_loss: 0.7459 - val_accuracy: 0.5307 Epoch 4/50 165/165 [==============================] - 37s 221ms/step - loss: 1.3741 - accuracy: 0.4547 - val_loss: 0.7604 - val_accuracy: 0.5327 Epoch 5/50 165/165 [==============================] - 38s 228ms/step - loss: 1.3076 - accuracy: 0.4645 - val_loss: 0.7711 - val_accuracy: 0.5293 Epoch 6/50 165/165 [==============================] - 37s 221ms/step - loss: 1.2519 - accuracy: 0.4617 - val_loss: 0.7779 - val_accuracy: 0.5278 Epoch 7/50 165/165 [==============================] - 37s 222ms/step - loss: 1.1954 - accuracy: 0.4756 - val_loss: 0.7849 - val_accuracy: 0.5288 Epoch 8/50 165/165 [==============================] - 37s 223ms/step - loss: 1.1407 - accuracy: 0.4794 - val_loss: 0.7911 - val_accuracy: 0.5268 Epoch 9/50 165/165 [==============================] - 37s 223ms/step - loss: 1.1020 - accuracy: 0.4809 - val_loss: 0.7936 - val_accuracy: 0.5254 Epoch 10/50 165/165 [==============================] - 38s 230ms/step - loss: 1.0634 - accuracy: 0.4968 - val_loss: 0.7913 - val_accuracy: 0.5273 Epoch 11/50 165/165 [==============================] - 40s 239ms/step - loss: 1.0366 - accuracy: 0.4876 - val_loss: 0.7885 - val_accuracy: 0.5195 Epoch 12/50 165/165 [==============================] - 40s 242ms/step - loss: 1.0130 - accuracy: 0.4939 - val_loss: 0.7858 - val_accuracy: 0.5185 Epoch 13/50 165/165 [==============================] - 40s 241ms/step - loss: 0.9906 - accuracy: 0.4922 - val_loss: 0.7807 - val_accuracy: 0.5185 Epoch 14/50 165/165 [==============================] - 37s 225ms/step - loss: 0.9728 - accuracy: 0.4861 - val_loss: 0.7768 - val_accuracy: 0.5200 Epoch 15/50 165/165 [==============================] - 37s 223ms/step - loss: 0.9554 - accuracy: 0.4894 - val_loss: 0.7720 - val_accuracy: 0.5229 Epoch 16/50 165/165 [==============================] - 37s 224ms/step - loss: 0.9416 - accuracy: 0.4969 - val_loss: 0.7689 - val_accuracy: 0.5205 Epoch 17/50 165/165 [==============================] - 37s 224ms/step - loss: 0.9269 - accuracy: 0.4860 - val_loss: 0.7643 - val_accuracy: 0.5244 Epoch 18/50 165/165 [==============================] - 37s 226ms/step - loss: 0.9157 - accuracy: 0.4975 - val_loss: 0.7604 - val_accuracy: 0.5249 Epoch 19/50 165/165 [==============================] - 38s 229ms/step - loss: 0.9044 - accuracy: 0.5027 - val_loss: 0.7568 - val_accuracy: 0.5220 Epoch 20/50 165/165 [==============================] - 37s 223ms/step - loss: 0.8976 - accuracy: 0.4862 - val_loss: 0.7539 - val_accuracy: 0.5229 Epoch 21/50 165/165 [==============================] - 37s 223ms/step - loss: 0.8875 - accuracy: 0.4853 - val_loss: 0.7512 - val_accuracy: 0.5239 Epoch 22/50 165/165 [==============================] - 37s 223ms/step - loss: 0.8792 - accuracy: 0.4876 - val_loss: 0.7481 - val_accuracy: 0.5210 Epoch 23/50 165/165 [==============================] - 38s 229ms/step - loss: 0.8716 - accuracy: 0.4915 - val_loss: 0.7455 - val_accuracy: 0.5176 Epoch 24/50 165/165 [==============================] - 37s 225ms/step - loss: 0.8638 - accuracy: 0.4955 - val_loss: 0.7431 - val_accuracy: 0.5210 Epoch 25/50 165/165 [==============================] - 37s 
225ms/step - loss: 0.8583 - accuracy: 0.4866 - val_loss: 0.7409 - val_accuracy: 0.5239 Epoch 26/50 165/165 [==============================] - 37s 224ms/step - loss: 0.8525 - accuracy: 0.4916 - val_loss: 0.7387 - val_accuracy: 0.5229 Epoch 27/50 165/165 [==============================] - 37s 225ms/step - loss: 0.8470 - accuracy: 0.4979 - val_loss: 0.7368 - val_accuracy: 0.5229 Epoch 28/50 165/165 [==============================] - 37s 224ms/step - loss: 0.8407 - accuracy: 0.5118 - val_loss: 0.7349 - val_accuracy: 0.5220 Epoch 29/50 165/165 [==============================] - 37s 225ms/step - loss: 0.8354 - accuracy: 0.5020 - val_loss: 0.7328 - val_accuracy: 0.5215 Epoch 30/50 165/165 [==============================] - 37s 225ms/step - loss: 0.8316 - accuracy: 0.4874 - val_loss: 0.7314 - val_accuracy: 0.5224 Epoch 31/50 165/165 [==============================] - 37s 225ms/step - loss: 0.8271 - accuracy: 0.4922 - val_loss: 0.7300 - val_accuracy: 0.5229 Epoch 32/50 165/165 [==============================] - 37s 226ms/step - loss: 0.8233 - accuracy: 0.4973 - val_loss: 0.7288 - val_accuracy: 0.5224 Epoch 33/50 165/165 [==============================] - 37s 225ms/step - loss: 0.8188 - accuracy: 0.5020 - val_loss: 0.7274 - val_accuracy: 0.5234 Epoch 34/50 165/165 [==============================] - 37s 225ms/step - loss: 0.8151 - accuracy: 0.5054 - val_loss: 0.7260 - val_accuracy: 0.5220 Epoch 35/50 165/165 [==============================] - 37s 224ms/step - loss: 0.8117 - accuracy: 0.5038 - val_loss: 0.7248 - val_accuracy: 0.5239 Epoch 36/50 165/165 [==============================] - 38s 232ms/step - loss: 0.8090 - accuracy: 0.4986 - val_loss: 0.7238 - val_accuracy: 0.5215 Epoch 37/50 165/165 [==============================] - 39s 235ms/step - loss: 0.8061 - accuracy: 0.4929 - val_loss: 0.7225 - val_accuracy: 0.5229 Epoch 38/50 165/165 [==============================] - 41s 248ms/step - loss: 0.8028 - accuracy: 0.4895 - val_loss: 0.7215 - val_accuracy: 0.5268 Epoch 39/50 165/165 [==============================] - 41s 245ms/step - loss: 0.7993 - accuracy: 0.4996 - val_loss: 0.7205 - val_accuracy: 0.5259 Epoch 40/50 165/165 [==============================] - 39s 238ms/step - loss: 0.7969 - accuracy: 0.4991 - val_loss: 0.7199 - val_accuracy: 0.5234 Epoch 41/50 165/165 [==============================] - 39s 237ms/step - loss: 0.7942 - accuracy: 0.5047 - val_loss: 0.7187 - val_accuracy: 0.5273 Epoch 42/50 165/165 [==============================] - 41s 246ms/step - loss: 0.7915 - accuracy: 0.5008 - val_loss: 0.7180 - val_accuracy: 0.5263 Epoch 43/50 165/165 [==============================] - 41s 246ms/step - loss: 0.7891 - accuracy: 0.5005 - val_loss: 0.7173 - val_accuracy: 0.5288 Epoch 44/50 165/165 [==============================] - 41s 248ms/step - loss: 0.7862 - accuracy: 0.5072 - val_loss: 0.7166 - val_accuracy: 0.5293 Epoch 45/50 165/165 [==============================] - 39s 233ms/step - loss: 0.7844 - accuracy: 0.5085 - val_loss: 0.7161 - val_accuracy: 0.5259 Epoch 46/50 165/165 [==============================] - 40s 240ms/step - loss: 0.7826 - accuracy: 0.5106 - val_loss: 0.7151 - val_accuracy: 0.5298 Epoch 47/50 165/165 [==============================] - 40s 243ms/step - loss: 0.7808 - accuracy: 0.5019 - val_loss: 0.7146 - val_accuracy: 0.5254 Epoch 48/50 165/165 [==============================] - 40s 240ms/step - loss: 0.7789 - accuracy: 0.4978 - val_loss: 0.7140 - val_accuracy: 0.5259 Epoch 49/50 165/165 [==============================] - 41s 246ms/step - loss: 0.7757 - accuracy: 0.5161 - 
val_loss: 0.7133 - val_accuracy: 0.5278 Epoch 50/50 165/165 [==============================] - 40s 244ms/step - loss: 0.7752 - accuracy: 0.5059 - val_loss: 0.7130 - val_accuracy: 0.5259
# https://stackoverflow.com/questions/63684459/should-i-use-evaluate-generator-or-evaluate-to-evaluate-my-cnn-model
# https://www.tensorflow.org/guide/keras/train_and_evaluate
# evaluate_generator is part of keras module in TensorFlow
model2_test_loss, model2_test_accuracy = model2.evaluate_generator(test_generator, steps=len(X_test) // batch)
print()
print('Test accuracy (FCN, batch size 50, SGD optimizer) = ', round(model2_test_accuracy,3))
print('Test loss (FCN, batch size 50, SGD optimizer) = ', round(model2_test_loss,3))
<ipython-input-51-211b37750733>:5: UserWarning: `Model.evaluate_generator` is deprecated and will be removed in a future version. Please use `Model.evaluate`, which supports generators. model2_test_loss, model2_test_accuracy = model2.evaluate_generator(test_generator, steps=len(X_test) // batch)
Test accuracy (FCN, batch size 50, SGD optimizer) = 0.529 Test loss (FCN, batch size 50, SGD optimizer) = 0.712
df = pd.read_csv(csv_log, delimiter=';')
# Printing out only the epoch with the highest accuracy
fcn02 = df[df['accuracy'] == df['accuracy'].max()]
display(fcn02)
| epoch | accuracy | loss | val_accuracy | val_loss | |
|---|---|---|---|---|---|
| 48 | 48 | 0.516066 | 0.77568 | 0.527805 | 0.713343 |
y_pred = model2.predict(X_test)
y_pred_classes = np.argmax(y_pred, axis=1)
# Generate classification report
target_names = ['cat', 'dog']
print(classification_report(np.argmax(y_test, axis=1), y_pred_classes, target_names=target_names))
82/82 [==============================] - 2s 24ms/step
precision recall f1-score support
cat 0.47 0.99 0.63 1209
dog 0.47 0.01 0.02 1385
accuracy 0.47 2594
macro avg 0.47 0.50 0.33 2594
weighted avg 0.47 0.47 0.31 2594
fcn02_numpy = fcn02.to_numpy()
model2_data = ['FCN (SGD, batch 50)', fcn02_numpy[0][0],
round(fcn02_numpy[0][1], 3), round(fcn02_numpy[0][3], 3), round(model2_test_accuracy, 3),
round(fcn02_numpy[0][2], 3), round(fcn02_numpy[0][4], 3), round(model2_test_loss, 3)]
fcn_results.loc[1,:8] = model2_data
display(fcn_results)
<ipython-input-65-909256f61667>:2: FutureWarning: Slicing a positional slice with .loc is not supported, and will raise TypeError in a future version. Use .loc with labels or .iloc with positions instead. fcn_results.loc[1,:8] = model2_data
| Model | Epoch | Train Accuracy | Valid Accuracy | Test Accuracy | Train Loss | Valid Loss | Test Loss | |
|---|---|---|---|---|---|---|---|---|
| 0 | FCN (Adam, batch 25) | 46 | 0.622 | 0.557 | 0.592 | 0.671 | 0.681 | 0.676 |
| 1 | FCN (SGD, batch 50) | 48 | 0.516 | 0.528 | 0.529 | 0.776 | 0.713 | 0.712 |
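As the FutureWarning above points out, positional slicing with .loc is deprecated; assigning the whole row by label gives the same result without the warning. A minimal sketch:
# Label-based row assignment instead of a positional slice with .loc
fcn_results.loc[1] = model2_data
display(fcn_results)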
model3 = FCN_model(len_classes=2, dropout_rate=0.2)
model3.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.0001),
loss='binary_crossentropy',
metrics=['accuracy'])
log_dir = "logs/fit/" + datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir=log_dir, histogram_freq=1)
epochs = 50
csv_log = 'training_adam_50_logs.csv'
history = model3.fit_generator(
train_generator,
steps_per_epoch=len(X_train) // batch,
epochs=epochs,
validation_data=valid_generator,
validation_steps=len(X_valid) // batch,
callbacks=[CSVLogger(csv_log, append=False, separator=";"), tensorboard_callback]
)
<ipython-input-60-a39c4326f29e>:5: UserWarning: `Model.fit_generator` is deprecated and will be removed in a future version. Please use `Model.fit`, which supports generators. history = model3.fit_generator(
Epoch 1/50 165/165 [==============================] - 45s 247ms/step - loss: 0.8169 - accuracy: 0.4924 - val_loss: 0.7076 - val_accuracy: 0.5317 Epoch 2/50 165/165 [==============================] - 39s 236ms/step - loss: 0.7296 - accuracy: 0.4955 - val_loss: 0.7058 - val_accuracy: 0.5190 Epoch 3/50 165/165 [==============================] - 41s 246ms/step - loss: 0.7167 - accuracy: 0.4921 - val_loss: 0.7034 - val_accuracy: 0.5317 Epoch 4/50 165/165 [==============================] - 39s 237ms/step - loss: 0.7103 - accuracy: 0.4907 - val_loss: 0.7015 - val_accuracy: 0.5317 Epoch 5/50 165/165 [==============================] - 40s 242ms/step - loss: 0.7060 - accuracy: 0.5039 - val_loss: 0.6998 - val_accuracy: 0.5307 Epoch 6/50 165/165 [==============================] - 40s 240ms/step - loss: 0.7033 - accuracy: 0.5216 - val_loss: 0.6984 - val_accuracy: 0.5327 Epoch 7/50 165/165 [==============================] - 38s 230ms/step - loss: 0.7008 - accuracy: 0.5373 - val_loss: 0.6972 - val_accuracy: 0.5312 Epoch 8/50 165/165 [==============================] - 41s 246ms/step - loss: 0.6993 - accuracy: 0.5357 - val_loss: 0.6963 - val_accuracy: 0.5332 Epoch 9/50 165/165 [==============================] - 38s 232ms/step - loss: 0.6977 - accuracy: 0.5289 - val_loss: 0.6955 - val_accuracy: 0.5312 Epoch 10/50 165/165 [==============================] - 40s 240ms/step - loss: 0.6967 - accuracy: 0.5267 - val_loss: 0.6949 - val_accuracy: 0.5302 Epoch 11/50 165/165 [==============================] - 40s 240ms/step - loss: 0.6957 - accuracy: 0.5256 - val_loss: 0.6944 - val_accuracy: 0.5298 Epoch 12/50 165/165 [==============================] - 39s 236ms/step - loss: 0.6945 - accuracy: 0.5270 - val_loss: 0.6939 - val_accuracy: 0.5327 Epoch 13/50 165/165 [==============================] - 39s 238ms/step - loss: 0.6941 - accuracy: 0.5275 - val_loss: 0.6933 - val_accuracy: 0.5332 Epoch 14/50 165/165 [==============================] - 39s 234ms/step - loss: 0.6934 - accuracy: 0.5260 - val_loss: 0.6931 - val_accuracy: 0.5317 Epoch 15/50 165/165 [==============================] - 38s 231ms/step - loss: 0.6927 - accuracy: 0.5269 - val_loss: 0.6929 - val_accuracy: 0.5312 Epoch 16/50 165/165 [==============================] - 37s 224ms/step - loss: 0.6919 - accuracy: 0.5277 - val_loss: 0.6924 - val_accuracy: 0.5332 Epoch 17/50 165/165 [==============================] - 42s 255ms/step - loss: 0.6916 - accuracy: 0.5279 - val_loss: 0.6922 - val_accuracy: 0.5312 Epoch 18/50 165/165 [==============================] - 40s 239ms/step - loss: 0.6913 - accuracy: 0.5278 - val_loss: 0.6921 - val_accuracy: 0.5293 Epoch 19/50 165/165 [==============================] - 39s 238ms/step - loss: 0.6904 - accuracy: 0.5318 - val_loss: 0.6915 - val_accuracy: 0.5332 Epoch 20/50 165/165 [==============================] - 38s 227ms/step - loss: 0.6903 - accuracy: 0.5311 - val_loss: 0.6918 - val_accuracy: 0.5312 Epoch 21/50 165/165 [==============================] - 37s 226ms/step - loss: 0.6895 - accuracy: 0.5478 - val_loss: 0.6912 - val_accuracy: 0.5346 Epoch 22/50 165/165 [==============================] - 37s 224ms/step - loss: 0.6896 - accuracy: 0.5484 - val_loss: 0.6919 - val_accuracy: 0.5395 Epoch 23/50 165/165 [==============================] - 37s 225ms/step - loss: 0.6894 - accuracy: 0.5448 - val_loss: 0.6915 - val_accuracy: 0.5302 Epoch 24/50 165/165 [==============================] - 38s 228ms/step - loss: 0.6889 - accuracy: 0.5433 - val_loss: 0.6906 - val_accuracy: 0.5312 Epoch 25/50 165/165 [==============================] - 38s 
229ms/step - loss: 0.6889 - accuracy: 0.5493 - val_loss: 0.6901 - val_accuracy: 0.5449 Epoch 26/50 165/165 [==============================] - 43s 258ms/step - loss: 0.6878 - accuracy: 0.5568 - val_loss: 0.6903 - val_accuracy: 0.5415 Epoch 27/50 165/165 [==============================] - 38s 228ms/step - loss: 0.6879 - accuracy: 0.5495 - val_loss: 0.6905 - val_accuracy: 0.5341 Epoch 28/50 165/165 [==============================] - 39s 233ms/step - loss: 0.6870 - accuracy: 0.5589 - val_loss: 0.6903 - val_accuracy: 0.5322 Epoch 29/50 165/165 [==============================] - 37s 225ms/step - loss: 0.6869 - accuracy: 0.5510 - val_loss: 0.6892 - val_accuracy: 0.5449 Epoch 30/50 165/165 [==============================] - 37s 225ms/step - loss: 0.6862 - accuracy: 0.5607 - val_loss: 0.6875 - val_accuracy: 0.5488 Epoch 31/50 165/165 [==============================] - 38s 231ms/step - loss: 0.6859 - accuracy: 0.5571 - val_loss: 0.6891 - val_accuracy: 0.5341 Epoch 32/50 165/165 [==============================] - 37s 223ms/step - loss: 0.6860 - accuracy: 0.5517 - val_loss: 0.6897 - val_accuracy: 0.5332 Epoch 33/50 165/165 [==============================] - 37s 226ms/step - loss: 0.6853 - accuracy: 0.5613 - val_loss: 0.6894 - val_accuracy: 0.5468 Epoch 34/50 165/165 [==============================] - 37s 224ms/step - loss: 0.6832 - accuracy: 0.5680 - val_loss: 0.6852 - val_accuracy: 0.5454 Epoch 35/50 165/165 [==============================] - 37s 224ms/step - loss: 0.6833 - accuracy: 0.5668 - val_loss: 0.6882 - val_accuracy: 0.5454 Epoch 36/50 165/165 [==============================] - 37s 224ms/step - loss: 0.6826 - accuracy: 0.5708 - val_loss: 0.6854 - val_accuracy: 0.5332 Epoch 37/50 165/165 [==============================] - 37s 223ms/step - loss: 0.6826 - accuracy: 0.5664 - val_loss: 0.6862 - val_accuracy: 0.5454 Epoch 38/50 165/165 [==============================] - 37s 223ms/step - loss: 0.6823 - accuracy: 0.5677 - val_loss: 0.6829 - val_accuracy: 0.5478 Epoch 39/50 165/165 [==============================] - 37s 225ms/step - loss: 0.6815 - accuracy: 0.5754 - val_loss: 0.6826 - val_accuracy: 0.5390 Epoch 40/50 165/165 [==============================] - 37s 224ms/step - loss: 0.6812 - accuracy: 0.5692 - val_loss: 0.6843 - val_accuracy: 0.5473 Epoch 41/50 165/165 [==============================] - 37s 224ms/step - loss: 0.6812 - accuracy: 0.5805 - val_loss: 0.6811 - val_accuracy: 0.5566 Epoch 42/50 165/165 [==============================] - 37s 226ms/step - loss: 0.6802 - accuracy: 0.5745 - val_loss: 0.6828 - val_accuracy: 0.5566 Epoch 43/50 165/165 [==============================] - 37s 223ms/step - loss: 0.6798 - accuracy: 0.5797 - val_loss: 0.6812 - val_accuracy: 0.5585 Epoch 44/50 165/165 [==============================] - 37s 224ms/step - loss: 0.6803 - accuracy: 0.5754 - val_loss: 0.6804 - val_accuracy: 0.5459 Epoch 45/50 165/165 [==============================] - 37s 222ms/step - loss: 0.6784 - accuracy: 0.5828 - val_loss: 0.6826 - val_accuracy: 0.5532 Epoch 46/50 165/165 [==============================] - 37s 225ms/step - loss: 0.6795 - accuracy: 0.5869 - val_loss: 0.6837 - val_accuracy: 0.5766 Epoch 47/50 165/165 [==============================] - 37s 223ms/step - loss: 0.6790 - accuracy: 0.5869 - val_loss: 0.6778 - val_accuracy: 0.5600 Epoch 48/50 165/165 [==============================] - 37s 224ms/step - loss: 0.6771 - accuracy: 0.5860 - val_loss: 0.6798 - val_accuracy: 0.5366 Epoch 49/50 165/165 [==============================] - 37s 223ms/step - loss: 0.6767 - accuracy: 0.5863 - 
val_loss: 0.6768 - val_accuracy: 0.5683 Epoch 50/50 165/165 [==============================] - 37s 225ms/step - loss: 0.6762 - accuracy: 0.5814 - val_loss: 0.6775 - val_accuracy: 0.5878
# https://stackoverflow.com/questions/63684459/should-i-use-evaluate-generator-or-evaluate-to-evaluate-my-cnn-model
# https://www.tensorflow.org/guide/keras/train_and_evaluate
# evaluate_generator is part of keras module in TensorFlow
model3_test_loss, model3_test_accuracy = model3.evaluate_generator(test_generator, steps=len(X_test) // batch)
print()
print('Test accuracy (FCN, batch size 50, Adam optimizer) = ', round(model3_test_accuracy,3))
print('Test loss (FCN, batch size 50, Adam optimizer) = ', round(model3_test_loss,3))
<ipython-input-61-c764f886d1b7>:5: UserWarning: `Model.evaluate_generator` is deprecated and will be removed in a future version. Please use `Model.evaluate`, which supports generators. model3_test_loss, model3_test_accuracy = model3.evaluate_generator(test_generator, steps=len(X_test) // batch)
Test accuracy (FCN, batch size 50, Adam optimizer) = 0.578 Test loss (FCN, batch size 50, Adam optimizer) = 0.681
df = pd.read_csv(csv_log, delimiter=';')
# Printing out only the epoch with the highest accuracy
fcn03 = df[df['accuracy'] == df['accuracy'].max()]
display(fcn03)
| epoch | accuracy | loss | val_accuracy | val_loss | |
|---|---|---|---|---|---|
| 45 | 45 | 0.58688 | 0.679518 | 0.576585 | 0.68371 |
| 46 | 46 | 0.58688 | 0.678986 | 0.560000 | 0.67781 |
y_pred = model3.predict(X_test)
y_pred_classes = np.argmax(y_pred, axis=1)
# Generate classification report
target_names = ['cat', 'dog']
print(classification_report(np.argmax(y_test, axis=1), y_pred_classes, target_names=target_names))
82/82 [==============================] - 2s 24ms/step
precision recall f1-score support
cat 0.49 0.75 0.59 1209
dog 0.59 0.31 0.41 1385
accuracy 0.52 2594
macro avg 0.54 0.53 0.50 2594
weighted avg 0.54 0.52 0.49 2594
fcn03_numpy = fcn03.to_numpy()
model3_data = ['FCN (Adam, batch 50)', fcn03_numpy[0][0],
               round(fcn03_numpy[0][1], 3), round(fcn03_numpy[0][3], 3), round(model3_test_accuracy, 3),
               round(fcn03_numpy[0][2], 3), round(fcn03_numpy[0][4], 3), round(model3_test_loss, 3)]
fcn_results.loc[2,:8] = model3_data
display(fcn_results)
<ipython-input-68-01d14541c68e>:6: FutureWarning: Slicing a positional slice with .loc is not supported, and will raise TypeError in a future version. Use .loc with labels or .iloc with positions instead. fcn_results.loc[2,:8] = model3_data
| Model | Epoch | Train Accuracy | Valid Accuracy | Test Accuracy | Train Loss | Valid Loss | Test Loss | |
|---|---|---|---|---|---|---|---|---|
| 0 | FCN (Adam, batch 25) | 46 | 0.622 | 0.557 | 0.592 | 0.671 | 0.681 | 0.676 |
| 1 | FCN (SGD, batch 50) | 48 | 0.516 | 0.528 | 0.529 | 0.776 | 0.713 | 0.712 |
| 2 | FCN (Adam, batch 50) | 45.0 | 0.587 | 0.577 | 0.578 | 0.68 | 0.684 | 0.681 |
Several experiments outside of this notebook were done to establish an optimal dropout rate, the number of hidden layers, and a suitable number of epochs. The Adam optimizer was expected to perform the best, but stochastic gradient descent (SGD) was included for comparison. The two models with the Adam optimizer performed better based on their training and testing accuracies and their lower overall losses.
Their testing accuracies and losses were very close: 0.592 accuracy and 0.676 loss for the batch size of 25 versus 0.578 and 0.681 for the batch size of 50. The model with the batch size of 25 had a slightly higher training accuracy of 0.622 (compared to its validation and testing accuracies), which suggests mild overfitting, while its validation and testing accuracies are comparable to those of the other Adam model with the larger batch size. Taking into account that its precision at recognizing both cats and dogs was slightly higher in the classification report, the last model, with the Adam optimizer and a batch size of 50, performed the best of the three.
display(fcn_results)
| Model | Epoch | Train Accuracy | Valid Accuracy | Test Accuracy | Train Loss | Valid Loss | Test Loss | |
|---|---|---|---|---|---|---|---|---|
| 0 | FCN (Adam, batch 25) | 46 | 0.622 | 0.557 | 0.592 | 0.671 | 0.681 | 0.676 |
| 1 | FCN (SGD, batch 50) | 48 | 0.516 | 0.528 | 0.529 | 0.776 | 0.713 | 0.712 |
| 2 | FCN (Adam, batch 50) | 45.0 | 0.587 | 0.577 | 0.578 | 0.68 | 0.684 | 0.681 |
Confusion matrix of the last model (FCN (Adam, batch 50)):
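A minimal sketch for reproducing this confusion matrix from model3's predictions and the true test labels (the arrays and imports are already defined above):
# Confusion matrix for FCN (Adam, batch 50): true labels vs predicted classes
y_pred = model3.predict(X_test)
y_pred_classes = np.argmax(y_pred, axis=1)
cm = confusion_matrix(np.argmax(y_test, axis=1), y_pred_classes)
disp = ConfusionMatrixDisplay(confusion_matrix=cm, display_labels=['cat', 'dog'])
disp.plot()
plt.show()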
# Started another instance of TensorBoard because the previous one had some issues
log_dir = "logs/fit/" + datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir=log_dir, histogram_freq=1)
%tensorboard --logdir logs/fit --port 6007
colab_path = '/content/drive/MyDrive/aml'
train = colab_path + '/' + 'training_set/'
test = colab_path + '/' + 'test_set/'
cates = ['dogs', 'cats']
def load_images_and_labels(data_path, cates):
X = []
y = []
i = 0
for index, cate in enumerate(cates):
for img_name in os.listdir(data_path + cate):
i = i +1
print(i)
img = cv2.imread(data_path + cate + '/' + img_name)
if img is not None:
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
img_array = Image.fromarray(img, 'RGB')
# resize image to 227x227 which is the required input size of the Alexnet model
img_rs = img_array.resize((227,227))
# convert the image to array
img_rs = np.array(img_rs)
X.append(img_rs)
y.append(index)
return X, y
X_train, y_train = load_images_and_labels(train, cates)
Streaming output truncated to the last 5000 lines (per-image loading counter: 3008 … 4506).
4507
4508
4509
4510
4511
4512
4513
4514
4515
4516
4517
4518
4519
4520
4521
4522
4523
4524
4525
4526
4527
4528
4529
4530
4531
4532
4533
4534
4535
4536
4537
4538
4539
4540
4541
4542
4543
4544
4545
4546
4547
4548
4549
4550
4551
4552
4553
4554
4555
4556
4557
4558
4559
4560
4561
4562
4563
4564
4565
4566
4567
4568
4569
4570
4571
4572
4573
4574
4575
4576
4577
4578
4579
4580
4581
4582
4583
4584
4585
4586
4587
4588
4589
4590
4591
4592
4593
4594
4595
4596
4597
4598
4599
4600
4601
4602
4603
4604
4605
4606
4607
4608
4609
4610
4611
4612
4613
4614
4615
4616
4617
4618
4619
4620
4621
4622
4623
4624
4625
4626
4627
4628
4629
4630
4631
4632
4633
4634
4635
4636
4637
4638
4639
4640
4641
4642
4643
4644
4645
4646
4647
4648
4649
4650
4651
4652
4653
4654
4655
4656
4657
4658
4659
4660
4661
4662
4663
4664
4665
4666
4667
4668
4669
4670
4671
4672
4673
4674
4675
4676
4677
4678
4679
4680
4681
4682
4683
4684
4685
4686
4687
4688
4689
4690
4691
4692
4693
4694
4695
4696
4697
4698
4699
4700
4701
4702
4703
4704
4705
4706
4707
4708
4709
4710
4711
4712
4713
4714
4715
4716
4717
4718
4719
4720
4721
4722
4723
4724
4725
4726
4727
4728
4729
4730
4731
4732
4733
4734
4735
4736
4737
4738
4739
4740
4741
4742
4743
4744
4745
4746
4747
4748
4749
4750
4751
4752
4753
4754
4755
4756
4757
4758
4759
4760
4761
4762
4763
4764
4765
4766
4767
4768
4769
4770
4771
4772
4773
4774
4775
4776
4777
4778
4779
4780
4781
4782
4783
4784
4785
4786
4787
4788
4789
4790
4791
4792
4793
4794
4795
4796
4797
4798
4799
4800
4801
4802
4803
4804
4805
4806
4807
4808
4809
4810
4811
4812
4813
4814
4815
4816
4817
4818
4819
4820
4821
4822
4823
4824
4825
4826
4827
4828
4829
4830
4831
4832
4833
4834
4835
4836
4837
4838
4839
4840
4841
4842
4843
4844
4845
4846
4847
4848
4849
4850
4851
4852
4853
4854
4855
4856
4857
4858
4859
4860
4861
4862
4863
4864
4865
4866
4867
4868
4869
4870
4871
4872
4873
4874
4875
4876
4877
4878
4879
4880
4881
4882
4883
4884
4885
4886
4887
4888
4889
4890
4891
4892
4893
4894
4895
4896
4897
4898
4899
4900
4901
4902
4903
4904
4905
4906
4907
4908
4909
4910
4911
4912
4913
4914
4915
4916
4917
4918
4919
4920
4921
4922
4923
4924
4925
4926
4927
4928
4929
4930
4931
4932
4933
4934
4935
4936
4937
4938
4939
4940
4941
4942
4943
4944
4945
4946
4947
4948
4949
4950
4951
4952
4953
4954
4955
4956
4957
4958
4959
4960
4961
4962
4963
4964
4965
4966
4967
4968
4969
4970
4971
4972
4973
4974
4975
4976
4977
4978
4979
4980
4981
4982
4983
4984
4985
4986
4987
4988
4989
4990
4991
4992
4993
4994
4995
4996
4997
4998
4999
5000
5001
5002
5003
5004
5005
5006
5007
5008
5009
5010
5011
5012
5013
5014
5015
5016
5017
5018
5019
5020
5021
5022
5023
5024
5025
5026
5027
5028
5029
5030
5031
5032
5033
5034
5035
5036
5037
5038
5039
5040
5041
5042
5043
5044
5045
5046
5047
5048
5049
5050
5051
5052
5053
5054
5055
5056
5057
5058
5059
5060
5061
5062
5063
5064
5065
5066
5067
5068
5069
5070
5071
5072
5073
5074
5075
5076
5077
5078
5079
5080
5081
5082
5083
5084
5085
5086
5087
5088
5089
5090
5091
5092
5093
5094
5095
5096
5097
5098
5099
5100
5101
5102
5103
5104
5105
5106
5107
5108
5109
5110
5111
5112
5113
5114
5115
5116
5117
5118
5119
5120
5121
5122
5123
5124
5125
5126
5127
5128
5129
5130
5131
5132
5133
5134
5135
5136
5137
5138
5139
5140
5141
5142
5143
5144
5145
5146
5147
5148
5149
5150
5151
5152
5153
5154
5155
5156
5157
5158
5159
5160
5161
5162
5163
5164
5165
5166
5167
5168
5169
5170
5171
5172
5173
5174
5175
5176
5177
5178
5179
5180
5181
5182
5183
5184
5185
5186
5187
5188
5189
5190
5191
5192
5193
5194
5195
5196
5197
5198
5199
5200
5201
5202
5203
5204
5205
5206
5207
5208
5209
5210
5211
5212
5213
5214
5215
5216
5217
5218
5219
5220
5221
5222
5223
5224
5225
5226
5227
5228
5229
5230
5231
5232
5233
5234
5235
5236
5237
5238
5239
5240
5241
5242
5243
5244
5245
5246
5247
5248
5249
5250
5251
5252
5253
5254
5255
5256
5257
5258
5259
5260
5261
5262
5263
5264
5265
5266
5267
5268
5269
5270
5271
5272
5273
5274
5275
5276
5277
5278
5279
5280
5281
5282
5283
5284
5285
5286
5287
5288
5289
5290
5291
5292
5293
5294
5295
5296
5297
5298
5299
5300
5301
5302
5303
5304
5305
5306
5307
5308
5309
5310
5311
5312
5313
5314
5315
5316
5317
5318
5319
5320
5321
5322
5323
5324
5325
5326
5327
5328
5329
5330
5331
5332
5333
5334
5335
5336
5337
5338
5339
5340
5341
5342
5343
5344
5345
5346
5347
5348
5349
5350
5351
5352
5353
5354
5355
5356
5357
5358
5359
5360
5361
5362
5363
5364
5365
5366
5367
5368
5369
5370
5371
5372
5373
5374
5375
5376
5377
5378
5379
5380
5381
5382
5383
5384
5385
5386
5387
5388
5389
5390
5391
5392
5393
5394
5395
5396
5397
5398
5399
5400
5401
5402
5403
5404
5405
5406
5407
5408
5409
5410
5411
5412
5413
5414
5415
5416
5417
5418
5419
5420
5421
5422
5423
5424
5425
5426
5427
5428
5429
5430
5431
5432
5433
5434
5435
5436
5437
5438
5439
5440
5441
5442
5443
5444
5445
5446
5447
5448
5449
5450
5451
5452
5453
5454
5455
5456
5457
5458
5459
5460
5461
5462
5463
5464
5465
5466
5467
5468
5469
5470
5471
5472
5473
5474
5475
5476
5477
5478
5479
5480
5481
5482
5483
5484
5485
5486
5487
5488
5489
5490
5491
5492
5493
5494
5495
5496
5497
5498
5499
5500
5501
5502
5503
5504
5505
5506
5507
5508
5509
5510
5511
5512
5513
5514
5515
5516
5517
5518
5519
5520
5521
5522
5523
5524
5525
5526
5527
5528
5529
5530
5531
5532
5533
5534
5535
5536
5537
5538
5539
5540
5541
5542
5543
5544
5545
5546
5547
5548
5549
5550
5551
5552
5553
5554
5555
5556
5557
5558
5559
5560
5561
5562
5563
5564
5565
5566
5567
5568
5569
5570
5571
5572
5573
5574
5575
5576
5577
5578
5579
5580
5581
5582
5583
5584
5585
5586
5587
5588
5589
5590
5591
5592
5593
5594
5595
5596
5597
5598
5599
5600
5601
5602
5603
5604
5605
5606
5607
5608
5609
5610
5611
5612
5613
5614
5615
5616
5617
5618
5619
5620
5621
5622
5623
5624
5625
5626
5627
5628
5629
5630
5631
5632
5633
5634
5635
5636
5637
5638
5639
5640
5641
5642
5643
5644
5645
5646
5647
5648
5649
5650
5651
5652
5653
5654
5655
5656
5657
5658
5659
5660
5661
5662
5663
5664
5665
5666
5667
5668
5669
5670
5671
5672
5673
5674
5675
5676
5677
5678
5679
5680
5681
5682
5683
5684
5685
5686
5687
5688
5689
5690
5691
5692
5693
5694
5695
5696
5697
5698
5699
5700
5701
5702
5703
5704
5705
5706
5707
5708
5709
5710
5711
5712
5713
5714
5715
5716
5717
5718
5719
5720
5721
5722
5723
5724
5725
5726
5727
5728
5729
5730
5731
5732
5733
5734
5735
5736
5737
5738
5739
5740
5741
5742
5743
5744
5745
5746
5747
5748
5749
5750
5751
5752
5753
5754
5755
5756
5757
5758
5759
5760
5761
5762
5763
5764
5765
5766
5767
5768
5769
5770
5771
5772
5773
5774
5775
5776
5777
5778
5779
5780
5781
5782
5783
5784
5785
5786
5787
5788
5789
5790
5791
5792
5793
5794
5795
5796
5797
5798
5799
5800
5801
5802
5803
5804
5805
5806
5807
5808
5809
5810
5811
5812
5813
5814
5815
5816
5817
5818
5819
5820
5821
5822
5823
5824
5825
5826
5827
5828
5829
5830
5831
5832
5833
5834
5835
5836
5837
5838
5839
5840
5841
5842
5843
5844
5845
5846
5847
5848
5849
5850
5851
5852
5853
5854
5855
5856
5857
5858
5859
5860
5861
5862
5863
5864
5865
5866
5867
5868
5869
5870
5871
5872
5873
5874
5875
5876
5877
5878
5879
5880
5881
5882
5883
5884
5885
5886
5887
5888
5889
5890
5891
5892
5893
5894
5895
5896
5897
5898
5899
5900
5901
5902
5903
5904
5905
5906
5907
5908
5909
5910
5911
5912
5913
5914
5915
5916
5917
5918
5919
5920
5921
5922
5923
5924
5925
5926
5927
5928
5929
5930
5931
5932
5933
5934
5935
5936
5937
5938
5939
5940
5941
5942
5943
5944
5945
5946
5947
5948
5949
5950
5951
5952
5953
5954
5955
5956
5957
5958
5959
5960
5961
5962
5963
5964
5965
5966
5967
5968
5969
5970
5971
5972
5973
5974
5975
5976
5977
5978
5979
5980
5981
5982
5983
5984
5985
5986
5987
5988
5989
5990
5991
5992
5993
5994
5995
5996
5997
5998
5999
6000
6001
6002
6003
6004
6005
6006
6007
6008
6009
6010
6011
6012
6013
6014
6015
6016
6017
6018
6019
6020
6021
6022
6023
6024
6025
6026
6027
6028
6029
6030
6031
6032
6033
6034
6035
6036
6037
6038
6039
6040
6041
6042
6043
6044
6045
6046
6047
6048
6049
6050
6051
6052
6053
6054
6055
6056
6057
6058
6059
6060
6061
6062
6063
6064
6065
6066
6067
6068
6069
6070
6071
6072
6073
6074
6075
6076
6077
6078
6079
6080
6081
6082
6083
6084
6085
6086
6087
6088
6089
6090
6091
6092
6093
6094
6095
6096
6097
6098
6099
6100
6101
6102
6103
6104
6105
6106
6107
6108
6109
6110
6111
6112
6113
6114
6115
6116
6117
6118
6119
6120
6121
6122
6123
6124
6125
6126
6127
6128
6129
6130
6131
6132
6133
6134
6135
6136
6137
6138
6139
6140
6141
6142
6143
6144
6145
6146
6147
6148
6149
6150
6151
6152
6153
6154
6155
6156
6157
6158
6159
6160
6161
6162
6163
6164
6165
6166
6167
6168
6169
6170
6171
6172
6173
6174
6175
6176
6177
6178
6179
6180
6181
6182
6183
6184
6185
6186
6187
6188
6189
6190
6191
6192
6193
6194
6195
6196
6197
6198
6199
6200
6201
6202
6203
6204
6205
6206
6207
6208
6209
6210
6211
6212
6213
6214
6215
6216
6217
6218
6219
6220
6221
6222
6223
6224
6225
6226
6227
6228
6229
6230
6231
6232
6233
6234
6235
6236
6237
6238
6239
6240
6241
6242
6243
6244
6245
6246
6247
6248
6249
6250
6251
6252
6253
6254
6255
6256
6257
6258
6259
6260
6261
6262
6263
6264
6265
6266
6267
6268
6269
6270
6271
6272
6273
6274
6275
6276
6277
6278
6279
6280
6281
6282
6283
6284
6285
6286
6287
6288
6289
6290
6291
6292
6293
6294
6295
6296
6297
6298
6299
6300
6301
6302
6303
6304
6305
6306
6307
6308
6309
6310
6311
6312
6313
6314
6315
6316
6317
6318
6319
6320
6321
6322
6323
6324
6325
6326
6327
6328
6329
6330
6331
6332
6333
6334
6335
6336
6337
6338
6339
6340
6341
6342
6343
6344
6345
6346
6347
6348
6349
6350
6351
6352
6353
6354
6355
6356
6357
6358
6359
6360
6361
6362
6363
6364
6365
6366
6367
6368
6369
6370
6371
6372
6373
6374
6375
6376
6377
6378
6379
6380
6381
6382
6383
6384
6385
6386
6387
6388
6389
6390
6391
6392
6393
6394
6395
6396
6397
6398
6399
6400
6401
6402
6403
6404
6405
6406
6407
6408
6409
6410
6411
6412
6413
6414
6415
6416
6417
6418
6419
6420
6421
6422
6423
6424
6425
6426
6427
6428
6429
6430
6431
6432
6433
6434
6435
6436
6437
6438
6439
6440
6441
6442
6443
6444
6445
6446
6447
6448
6449
6450
6451
6452
6453
6454
6455
6456
6457
6458
6459
6460
6461
6462
6463
6464
6465
6466
6467
6468
6469
6470
6471
6472
6473
6474
6475
6476
6477
6478
6479
6480
6481
6482
6483
6484
6485
6486
6487
6488
6489
6490
6491
6492
6493
6494
6495
6496
6497
6498
6499
6500
6501
6502
6503
6504
6505
6506
6507
6508
6509
6510
6511
6512
6513
6514
6515
6516
6517
6518
6519
6520
6521
6522
6523
6524
6525
6526
6527
6528
6529
6530
6531
6532
6533
6534
6535
6536
6537
6538
6539
6540
6541
6542
6543
6544
6545
6546
6547
6548
6549
6550
6551
6552
6553
6554
6555
6556
6557
6558
6559
6560
6561
6562
6563
6564
6565
6566
6567
6568
6569
6570
6571
6572
6573
6574
6575
6576
6577
6578
6579
6580
6581
6582
6583
6584
6585
6586
6587
6588
6589
6590
6591
6592
6593
6594
6595
6596
6597
6598
6599
6600
6601
6602
6603
6604
6605
6606
6607
6608
6609
6610
6611
6612
6613
6614
6615
6616
6617
6618
6619
6620
6621
6622
6623
6624
6625
6626
6627
6628
6629
6630
6631
6632
6633
6634
6635
6636
6637
6638
6639
6640
6641
6642
6643
6644
6645
6646
6647
6648
6649
6650
6651
6652
6653
6654
6655
6656
6657
6658
6659
6660
6661
6662
6663
6664
6665
6666
6667
6668
6669
6670
6671
6672
6673
6674
6675
6676
6677
6678
6679
6680
6681
6682
6683
6684
6685
6686
6687
6688
6689
6690
6691
6692
6693
6694
6695
6696
6697
6698
6699
6700
6701
6702
6703
6704
6705
6706
6707
6708
6709
6710
6711
6712
6713
6714
6715
6716
6717
6718
6719
6720
6721
6722
6723
6724
6725
6726
6727
6728
6729
6730
6731
6732
6733
6734
6735
6736
6737
6738
6739
6740
6741
6742
6743
6744
6745
6746
6747
6748
6749
6750
6751
6752
6753
6754
6755
6756
6757
6758
6759
6760
6761
6762
6763
6764
6765
6766
6767
6768
6769
6770
6771
6772
6773
6774
6775
6776
6777
6778
6779
6780
6781
6782
6783
6784
6785
6786
6787
6788
6789
6790
6791
6792
6793
6794
6795
6796
6797
6798
6799
6800
6801
6802
6803
6804
6805
6806
6807
6808
6809
6810
6811
6812
6813
6814
6815
6816
6817
6818
6819
6820
6821
6822
6823
6824
6825
6826
6827
6828
6829
6830
6831
6832
6833
6834
6835
6836
6837
6838
6839
6840
6841
6842
6843
6844
6845
6846
6847
6848
6849
6850
6851
6852
6853
6854
6855
6856
6857
6858
6859
6860
6861
6862
6863
6864
6865
6866
6867
6868
6869
6870
6871
6872
6873
6874
6875
6876
6877
6878
6879
6880
6881
6882
6883
6884
6885
6886
6887
6888
6889
6890
6891
6892
6893
6894
6895
6896
6897
6898
6899
6900
6901
6902
6903
6904
6905
6906
6907
6908
6909
6910
6911
6912
6913
6914
6915
6916
6917
6918
6919
6920
6921
6922
6923
6924
6925
6926
6927
6928
6929
6930
6931
6932
6933
6934
6935
6936
6937
6938
6939
6940
6941
6942
6943
6944
6945
6946
6947
6948
6949
6950
6951
6952
6953
6954
6955
6956
6957
6958
6959
6960
6961
6962
6963
6964
6965
6966
6967
6968
6969
6970
6971
6972
6973
6974
6975
6976
6977
6978
6979
6980
6981
6982
6983
6984
6985
6986
6987
6988
6989
6990
6991
6992
6993
6994
6995
6996
6997
6998
6999
7000
7001
7002
7003
7004
7005
7006
7007
7008
7009
7010
7011
7012
7013
7014
7015
7016
7017
7018
7019
7020
7021
7022
7023
7024
7025
7026
7027
7028
7029
7030
7031
7032
7033
7034
7035
7036
7037
7038
7039
7040
7041
7042
7043
7044
7045
7046
7047
7048
7049
7050
7051
7052
7053
7054
7055
7056
7057
7058
7059
7060
7061
7062
7063
7064
7065
7066
7067
7068
7069
7070
7071
7072
7073
7074
7075
7076
7077
7078
7079
7080
7081
7082
7083
7084
7085
7086
7087
7088
7089
7090
7091
7092
7093
7094
7095
7096
7097
7098
7099
7100
7101
7102
7103
7104
7105
7106
7107
7108
7109
7110
7111
7112
7113
7114
7115
7116
7117
7118
7119
7120
7121
7122
7123
7124
7125
7126
7127
7128
7129
7130
7131
7132
7133
7134
7135
7136
7137
7138
7139
7140
7141
7142
7143
7144
7145
7146
7147
7148
7149
7150
7151
7152
7153
7154
7155
7156
7157
7158
7159
7160
7161
7162
7163
7164
7165
7166
7167
7168
7169
7170
7171
7172
7173
7174
7175
7176
7177
7178
7179
7180
7181
7182
7183
7184
7185
7186
7187
7188
7189
7190
7191
7192
7193
7194
7195
7196
7197
7198
7199
7200
7201
7202
7203
7204
7205
7206
7207
7208
7209
7210
7211
7212
7213
7214
7215
7216
7217
7218
7219
7220
7221
7222
7223
7224
7225
7226
7227
7228
7229
7230
7231
7232
7233
7234
7235
7236
7237
7238
7239
7240
7241
7242
7243
7244
7245
7246
7247
7248
7249
7250
7251
7252
7253
7254
7255
7256
7257
7258
7259
7260
7261
7262
7263
7264
7265
7266
7267
7268
7269
7270
7271
7272
7273
7274
7275
7276
7277
7278
7279
7280
7281
7282
7283
7284
7285
7286
7287
7288
7289
7290
7291
7292
7293
7294
7295
7296
7297
7298
7299
7300
7301
7302
7303
7304
7305
7306
7307
7308
7309
7310
7311
7312
7313
7314
7315
7316
7317
7318
7319
7320
7321
7322
7323
7324
7325
7326
7327
7328
7329
7330
7331
7332
7333
7334
7335
7336
7337
7338
7339
7340
7341
7342
7343
7344
7345
7346
7347
7348
7349
7350
7351
7352
7353
7354
7355
7356
7357
7358
7359
7360
7361
7362
7363
7364
7365
7366
7367
7368
7369
7370
7371
7372
7373
7374
7375
7376
7377
7378
7379
7380
7381
7382
7383
7384
7385
7386
7387
7388
7389
7390
7391
7392
7393
7394
7395
7396
7397
7398
7399
7400
7401
7402
7403
7404
7405
7406
7407
7408
7409
7410
7411
7412
7413
7414
7415
7416
7417
7418
7419
7420
7421
7422
7423
7424
7425
7426
7427
7428
7429
7430
7431
7432
7433
7434
7435
7436
7437
7438
7439
7440
7441
7442
7443
7444
7445
7446
7447
7448
7449
7450
7451
7452
7453
7454
7455
7456
7457
7458
7459
7460
7461
7462
7463
7464
7465
7466
7467
7468
7469
7470
7471
7472
7473
7474
7475
7476
7477
7478
7479
7480
7481
7482
7483
7484
7485
7486
7487
7488
7489
7490
7491
7492
7493
7494
7495
7496
7497
7498
7499
7500
7501
7502
7503
7504
7505
7506
7507
7508
7509
7510
7511
7512
7513
7514
7515
7516
7517
7518
7519
7520
7521
7522
7523
7524
7525
7526
7527
7528
7529
7530
7531
7532
7533
7534
7535
7536
7537
7538
7539
7540
7541
7542
7543
7544
7545
7546
7547
7548
7549
7550
7551
7552
7553
7554
7555
7556
7557
7558
7559
7560
7561
7562
7563
7564
7565
7566
7567
7568
7569
7570
7571
7572
7573
7574
7575
7576
7577
7578
7579
7580
7581
7582
7583
7584
7585
7586
7587
7588
7589
7590
7591
7592
7593
7594
7595
7596
7597
7598
7599
7600
7601
7602
7603
7604
7605
7606
7607
7608
7609
7610
7611
7612
7613
7614
7615
7616
7617
7618
7619
7620
7621
7622
7623
7624
7625
7626
7627
7628
7629
7630
7631
7632
7633
7634
7635
7636
7637
7638
7639
7640
7641
7642
7643
7644
7645
7646
7647
7648
7649
7650
7651
7652
7653
7654
7655
7656
7657
7658
7659
7660
7661
7662
7663
7664
7665
7666
7667
7668
7669
7670
7671
7672
7673
7674
7675
7676
7677
7678
7679
7680
7681
7682
7683
7684
7685
7686
7687
7688
7689
7690
7691
7692
7693
7694
7695
7696
7697
7698
7699
7700
7701
7702
7703
7704
7705
7706
7707
7708
7709
7710
7711
7712
7713
7714
7715
7716
7717
7718
7719
7720
7721
7722
7723
7724
7725
7726
7727
7728
7729
7730
7731
7732
7733
7734
7735
7736
7737
7738
7739
7740
7741
7742
7743
7744
7745
7746
7747
7748
7749
7750
7751
7752
7753
7754
7755
7756
7757
7758
7759
7760
7761
7762
7763
7764
7765
7766
7767
7768
7769
7770
7771
7772
7773
7774
7775
7776
7777
7778
7779
7780
7781
7782
7783
7784
7785
7786
7787
7788
7789
7790
7791
7792
7793
7794
7795
7796
7797
7798
7799
7800
7801
7802
7803
7804
7805
7806
7807
7808
7809
7810
7811
7812
7813
7814
7815
7816
7817
7818
7819
7820
7821
7822
7823
7824
7825
7826
7827
7828
7829
7830
7831
7832
7833
7834
7835
7836
7837
7838
7839
7840
7841
7842
7843
7844
7845
7846
7847
7848
7849
7850
7851
7852
7853
7854
7855
7856
7857
7858
7859
7860
7861
7862
7863
7864
7865
7866
7867
7868
7869
7870
7871
7872
7873
7874
7875
7876
7877
7878
7879
7880
7881
7882
7883
7884
7885
7886
7887
7888
7889
7890
7891
7892
7893
7894
7895
7896
7897
7898
7899
7900
7901
7902
7903
7904
7905
7906
7907
7908
7909
7910
7911
7912
7913
7914
7915
7916
7917
7918
7919
7920
7921
7922
7923
7924
7925
7926
7927
7928
7929
7930
7931
7932
7933
7934
7935
7936
7937
7938
7939
7940
7941
7942
7943
7944
7945
7946
7947
7948
7949
7950
7951
7952
7953
7954
7955
7956
7957
7958
7959
7960
7961
7962
7963
7964
7965
7966
7967
7968
7969
7970
7971
7972
7973
7974
7975
7976
7977
7978
7979
7980
7981
7982
7983
7984
7985
7986
7987
7988
7989
7990
7991
7992
7993
7994
7995
7996
7997
7998
7999
8000
8001
8002
8003
8004
8005
8006
8007
X_test, y_test = load_images_and_labels(test, cates)
(per-image index output omitted; indices 1 through 2025 were printed while loading the test images)
Here we convert X_train and X_test to arrays of float values, normalize the pixel values, and one-hot encode y_train and y_test into categorical labels.
from keras.utils import to_categorical
def preprocess_data(X, y):
    # convert X from list to array
    X = np.array(X)
    # convert integer values of X into floats
    X = X.astype(np.float32)
    # normalization
    X = X / 255.0
    # one-hot encoding the labels
    y = to_categorical(np.array(y))
    return X, y
(X_train, y_train) = preprocess_data(X_train, y_train)
(X_test, y_test) = preprocess_data(X_test, y_test)
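As a quick illustration of the one-hot encoding step (toy labels only, not the project data), to_categorical maps integer class labels to the two-column vectors expected by the softmax output layer:
from keras.utils import to_categorical
print(to_categorical([0, 1, 1, 0]))
# [[1. 0.]
#  [0. 1.]
#  [0. 1.]
#  [1. 0.]]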
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Dense, Flatten, Dropout
# binary cross-entropy (imported from keras.metrics, used here as the loss function)
from keras.metrics import binary_crossentropy
# optimization method (Stochastic Gradient Descent (SGD))
from keras.optimizers import SGD, Adam
from tensorflow.keras.utils import plot_model
def Alexnet():
    # Initialize the model
    model = Sequential()
    # layer 1: convolutional layer + max-pooling layer (see output shapes in the summary below)
    model.add(Conv2D(filters = 96, kernel_size = (11,11), strides = 4, padding = 'valid', activation = 'relu', input_shape = (227,227,3)))
    model.add(MaxPooling2D(pool_size = (3,3), strides = 2))
    # layer 2: convolutional layer + max-pooling layer
    model.add(Conv2D(filters = 256, kernel_size = (5,5), padding = 'same', activation = 'relu'))
    model.add(MaxPooling2D(pool_size = (3,3), strides = 2))
    # layers 3-5: three convolutional layers + 1 max-pooling layer
    model.add(Conv2D(filters = 384, kernel_size = (3,3), padding = 'same', activation = 'relu'))
    model.add(Conv2D(filters = 384, kernel_size = (3,3), padding = 'same', activation = 'relu'))
    model.add(Conv2D(filters = 256, kernel_size = (3,3), padding = 'same', activation = 'relu'))
    model.add(MaxPooling2D(pool_size = (3,3), strides = 2))
    # layers 6-8: two fully connected hidden layers and one fully connected output layer
    model.add(Flatten())
    model.add(Dense(4096, activation = 'relu'))
    model.add(Dropout(0.5))
    model.add(Dense(4096, activation = 'relu'))
    model.add(Dropout(0.5))
    model.add(Dense(2, activation = 'softmax'))
    # compile the model with a loss function, a metric, and an optimization method
    #opt = Adam(lr = 0.01) # 0.001 gives high loss values and lower accuracy
    opt = SGD(lr = 0.1) # 0.1 gives the lowest loss values
    model.compile(loss = binary_crossentropy,
                  optimizer = opt,
                  metrics = ['accuracy'])
    return model
Alexnet_model = Alexnet()
Alexnet_model.summary()
plot_model(Alexnet_model, to_file='AlexnetModel.png', show_shapes=True)
Model: "sequential_1"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d_5 (Conv2D) (None, 55, 55, 96) 34944
max_pooling2d_3 (MaxPooling (None, 27, 27, 96) 0
2D)
conv2d_6 (Conv2D) (None, 27, 27, 256) 614656
max_pooling2d_4 (MaxPooling (None, 13, 13, 256) 0
2D)
conv2d_7 (Conv2D) (None, 13, 13, 384) 885120
conv2d_8 (Conv2D) (None, 13, 13, 384) 1327488
conv2d_9 (Conv2D) (None, 13, 13, 256) 884992
max_pooling2d_5 (MaxPooling (None, 6, 6, 256) 0
2D)
flatten_1 (Flatten) (None, 9216) 0
dense_3 (Dense) (None, 4096) 37752832
dropout_2 (Dropout) (None, 4096) 0
dense_4 (Dense) (None, 4096) 16781312
dropout_3 (Dropout) (None, 4096) 0
dense_5 (Dense) (None, 2) 8194
=================================================================
Total params: 58,289,538
Trainable params: 58,289,538
Non-trainable params: 0
_________________________________________________________________
Below, we apply small rotations, shifts, and horizontal flips to the training images in order to help prevent overfitting.
from keras.preprocessing.image import ImageDataGenerator
from keras.callbacks import ModelCheckpoint
def train_model(model, X_train, y_train, X_test, y_test, epochs, batch_size):
    # Data generator with augmentation (rotation, shifts, horizontal flip)
    datagen = ImageDataGenerator(rotation_range = 5, width_shift_range = 0.1, height_shift_range = 0.1, horizontal_flip = True)
    # iterator over the training set
    it_train = datagen.flow(X_train, y_train, batch_size = batch_size)
    # path to save the checkpoint
    path_cp = os.getcwd() + '/' + 'weights_.hdf5'
    checkpoint_ = ModelCheckpoint(path_cp, monitor = 'loss', save_best_only = True, mode = 'auto')
    steps = X_train.shape[0] // batch_size
    # Fitting the model
    history = model.fit_generator(it_train, epochs = epochs, steps_per_epoch = steps,
                                  validation_data = (X_test, y_test), verbose = 1,
                                  callbacks = [checkpoint_])
    # Evaluating the model
    _, acc = model.evaluate(X_test, y_test, verbose = 1)
    print('%.3f' % (acc * 100.0))
    return history, acc
Below are the results from 30 epochs of our best tested model, using stochastic gradient descent (SGD) with a learning rate of 0.1. Results from other configurations can be found in the results and discussion section.
train_history, acc = train_model(Alexnet_model, X_train, y_train, X_test, y_test, epochs = 30, batch_size = 128)
Epoch 1/30: 62/62 - 98s 1s/step - loss: 0.6928 - accuracy: 0.5161 - val_loss: 0.6915 - val_accuracy: 0.5002
Epoch 2/30: 62/62 - 83s 1s/step - loss: 0.6900 - accuracy: 0.5347 - val_loss: 0.6895 - val_accuracy: 0.5146
Epoch 3/30: 62/62 - 84s 1s/step - loss: 0.6895 - accuracy: 0.5422 - val_loss: 0.6925 - val_accuracy: 0.4978
Epoch 4/30: 62/62 - 85s 1s/step - loss: 0.6862 - accuracy: 0.5577 - val_loss: 0.6846 - val_accuracy: 0.5185
Epoch 5/30: 62/62 - 81s 1s/step - loss: 0.6815 - accuracy: 0.5632 - val_loss: 0.7324 - val_accuracy: 0.5091
Epoch 6/30: 62/62 - 83s 1s/step - loss: 0.6735 - accuracy: 0.5819 - val_loss: 0.6718 - val_accuracy: 0.5561
Epoch 7/30: 62/62 - 83s 1s/step - loss: 0.6749 - accuracy: 0.5845 - val_loss: 0.6713 - val_accuracy: 0.5512
Epoch 8/30: 62/62 - 82s 1s/step - loss: 0.6492 - accuracy: 0.6219 - val_loss: 0.6283 - val_accuracy: 0.6634
Epoch 9/30: 62/62 - 81s 1s/step - loss: 0.6515 - accuracy: 0.6275 - val_loss: 0.6462 - val_accuracy: 0.6357
Epoch 10/30: 62/62 - 80s 1s/step - loss: 0.6322 - accuracy: 0.6481 - val_loss: 0.6056 - val_accuracy: 0.6822
Epoch 11/30: 62/62 - 81s 1s/step - loss: 0.6277 - accuracy: 0.6524 - val_loss: 0.6247 - val_accuracy: 0.6574
Epoch 12/30: 62/62 - 80s 1s/step - loss: 0.6331 - accuracy: 0.6398 - val_loss: 0.6241 - val_accuracy: 0.6950
Epoch 13/30: 62/62 - 80s 1s/step - loss: 0.6251 - accuracy: 0.6551 - val_loss: 0.5595 - val_accuracy: 0.7261
Epoch 14/30: 62/62 - 80s 1s/step - loss: 0.5929 - accuracy: 0.6853 - val_loss: 0.5607 - val_accuracy: 0.7197
Epoch 15/30: 62/62 - 81s 1s/step - loss: 0.5781 - accuracy: 0.6957 - val_loss: 0.5298 - val_accuracy: 0.7360
Epoch 16/30: 62/62 - 80s 1s/step - loss: 0.5559 - accuracy: 0.7090 - val_loss: 0.5341 - val_accuracy: 0.7439
Epoch 17/30: 62/62 - 81s 1s/step - loss: 0.5492 - accuracy: 0.7217 - val_loss: 0.5872 - val_accuracy: 0.6817
Epoch 18/30: 62/62 - 80s 1s/step - loss: 0.5658 - accuracy: 0.7047 - val_loss: 0.5811 - val_accuracy: 0.6965
Epoch 19/30: 62/62 - 80s 1s/step - loss: 0.5095 - accuracy: 0.7483 - val_loss: 0.4387 - val_accuracy: 0.7968
Epoch 20/30: 62/62 - 80s 1s/step - loss: 0.4845 - accuracy: 0.7643 - val_loss: 0.4550 - val_accuracy: 0.7924
Epoch 21/30: 62/62 - 80s 1s/step - loss: 0.4680 - accuracy: 0.7739 - val_loss: 0.5240 - val_accuracy: 0.7385
Epoch 22/30: 62/62 - 79s 1s/step - loss: 0.4615 - accuracy: 0.7825 - val_loss: 0.3917 - val_accuracy: 0.8230
Epoch 23/30: 62/62 - 79s 1s/step - loss: 0.4359 - accuracy: 0.7940 - val_loss: 0.4283 - val_accuracy: 0.8072
Epoch 24/30: 62/62 - 80s 1s/step - loss: 0.4107 - accuracy: 0.8113 - val_loss: 0.3789 - val_accuracy: 0.8398
Epoch 25/30: 62/62 - 79s 1s/step - loss: 0.4197 - accuracy: 0.8058 - val_loss: 0.3575 - val_accuracy: 0.8433
Epoch 26/30: 62/62 - 80s 1s/step - loss: 0.3993 - accuracy: 0.8160 - val_loss: 0.3631 - val_accuracy: 0.8379
Epoch 27/30: 62/62 - 80s 1s/step - loss: 0.3911 - accuracy: 0.8214 - val_loss: 0.3652 - val_accuracy: 0.8458
Epoch 28/30: 62/62 - 80s 1s/step - loss: 0.3750 - accuracy: 0.8323 - val_loss: 0.3269 - val_accuracy: 0.8606
Epoch 29/30: 62/62 - 80s 1s/step - loss: 0.3549 - accuracy: 0.8408 - val_loss: 0.3406 - val_accuracy: 0.8537
Epoch 30/30: 62/62 - 79s 1s/step - loss: 0.3866 - accuracy: 0.8157 - val_loss: 0.3413 - val_accuracy: 0.8473
Final evaluation: 64/64 - 1s 19ms/step - loss: 0.3413 - accuracy: 0.8473
84.726
Saving results as CSV
Results_SGD1 = {'train_loss': train_history.history['loss'],
'train_accuracy': train_history.history['accuracy'],
'val_loss': train_history.history['val_loss'],
'val_accuracy': train_history.history['val_accuracy']}
results_df = pd.DataFrame(Results_SGD1)
results_df.to_csv('ResultsAlexSGD1.csv', index=1)
'''
Results_SGD2 = {'train_loss': train_history.history['loss'],
'train_accuracy': train_history.history['accuracy'],
'val_loss': train_history.history['val_loss'],
'val_accuracy': train_history.history['val_accuracy']}
results_df = pd.DataFrame(Results_SGD2)
results_df.to_csv('ResultsAlexSGD2.csv', index=1)
Results_Adam = {'train_loss': train_history.history['loss'],
'train_accuracy': train_history.history['accuracy'],
'val_loss': train_history.history['val_loss'],
'val_accuracy': train_history.history['val_accuracy']}
results_df = pd.DataFrame(Results_Adam)
results_df.to_csv('ResultsAdam.csv', index=1)
'''
Accuracy Plot
def plot_accuracy(history):
    plt.figure(figsize = (10,6))
    plt.plot(history.history['accuracy'], color = 'blue', label = 'train')
    plt.plot(history.history['val_accuracy'], color = 'orange', label = 'val')
    plt.legend()
    plt.title('AlexNet Accuracy')
    plt.show()
plot_accuracy(train_history)
Loss Plot
def plot_loss(history):
    plt.figure(figsize = (10,6))
    plt.plot(history.history['loss'], color = 'blue', label = 'train')
    plt.plot(history.history['val_loss'], color = 'orange', label = 'val')
    plt.legend()
    plt.title('AlexNet Loss')
    plt.show()
plot_loss(train_history)
Transfer learning is a machine learning method in which a pre-trained model is used as the starting point for a model on a different dataset. This is useful because pre-trained models are typically developed on massive datasets, which requires substantial resources; the pre-trained model can then be reused to train a new model on a smaller dataset with far fewer resources.
To accomplish this, the pre-trained base model is frozen, meaning its weights are not updated on the new dataset. Fine-tuning involves changing the top layers to fit the new dataset and its classification labels.
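A minimal sketch of this freeze-and-replace-head pattern is shown below (illustrative only, using MobileNetV2 as a stand-in pretrained base; the actual EfficientNet version we used appears further down in this notebook):
import tensorflow as tf
# Load a pretrained base without its original classifier head
base = tf.keras.applications.MobileNetV2(input_shape=(224, 224, 3), include_top=False, weights='imagenet')
base.trainable = False  # freeze the pretrained weights
# Attach a new head for the binary cats-vs-dogs problem
inputs = tf.keras.Input(shape=(224, 224, 3))
x = base(inputs, training=False)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
outputs = tf.keras.layers.Dense(1, activation='sigmoid')(x)
sketch_model = tf.keras.Model(inputs, outputs)
sketch_model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])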
EfficientNet is built around the idea of using a compound coefficient to scale a convolutional neural network. Conventional approaches scale network dimensions (such as width, depth, and resolution) somewhat arbitrarily; the compound method scales every dimension together with a fixed set of coefficients. The creators of this network found that the best balance for scaling multiple dimensions together was depth = 1.20, width = 1.10, and resolution = 1.15: each time the network is scaled up by one step of the compound coefficient, the depth of layers increases by 20%, the width by 10%, and the image resolution by 15%. This compound scaling method was used to scale from the base model EfficientNetB0 up to the most complex model, EfficientNetB7.
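The compound-scaling arithmetic can be illustrated with a small computation (the coefficients are the ones quoted above; the phi values are hypothetical, and the B1-B7 variants do not map exactly to integer phi):
alpha, beta, gamma = 1.20, 1.10, 1.15  # depth, width, resolution coefficients
for phi in range(4):
    # each dimension is scaled by its coefficient raised to the compound coefficient phi
    print(f"phi={phi}: depth x{alpha ** phi:.2f}, width x{beta ** phi:.2f}, resolution x{gamma ** phi:.2f}")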
Model Input Features - Images: the inputs to the models are images that have been rescaled and augmented.
Model Target - Classification: Cat or Dog label.
Seven EfficientNet models were created (see the model table below).
The EfficientNet models are using the Binary Cross Entropy loss function.
$$\mathcal{L}_{\text{BCE}}(y, \hat{y}) = -\frac{1}{N}\sum_{i=1}^{N}\left[ y_i \log(\hat{y}_i) + (1 - y_i) \log(1 - \hat{y}_i) \right]$$
Model Table:
Table of EfficientNet Base Models and the required image input resolution.
Workflow Diagram:
Diagram of the EfficientNet model workflow: images were resized to the appropriate input size for each EfficientNet model, and the weights of a pre-trained EfficientNet model (trained on ImageNet) were used. The diagram displays the architecture of EfficientNetB0. A top layer was added to adapt the pretrained model to our dataset and classification labels.
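As a quick sanity check of the BCE formula shown above (the labels and predicted probabilities below are made up purely for illustration):
import numpy as np
y = np.array([1, 0, 1, 1])              # true labels (e.g., 1 = dog, 0 = cat)
y_hat = np.array([0.9, 0.2, 0.6, 0.8])  # predicted probabilities
bce = -np.mean(y * np.log(y_hat) + (1 - y) * np.log(1 - y_hat))
print(f"BCE loss: {bce:.4f}")  # roughly 0.2656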
!pip install efficientnet
Looking in indexes: https://pypi.org/simple, https://us-python.pkg.dev/colab-wheels/public/simple/
Collecting efficientnet
Downloading efficientnet-1.1.1-py3-none-any.whl (18 kB)
Requirement already satisfied: scikit-image in /usr/local/lib/python3.9/dist-packages (from efficientnet) (0.19.3)
Collecting keras-applications<=1.0.8,>=1.0.7
Downloading Keras_Applications-1.0.8-py3-none-any.whl (50 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 50.7/50.7 kB 1.6 MB/s eta 0:00:00
Requirement already satisfied: numpy>=1.9.1 in /usr/local/lib/python3.9/dist-packages (from keras-applications<=1.0.8,>=1.0.7->efficientnet) (1.22.4)
Requirement already satisfied: h5py in /usr/local/lib/python3.9/dist-packages (from keras-applications<=1.0.8,>=1.0.7->efficientnet) (3.8.0)
Requirement already satisfied: tifffile>=2019.7.26 in /usr/local/lib/python3.9/dist-packages (from scikit-image->efficientnet) (2023.4.12)
Requirement already satisfied: networkx>=2.2 in /usr/local/lib/python3.9/dist-packages (from scikit-image->efficientnet) (3.1)
Requirement already satisfied: pillow!=7.1.0,!=7.1.1,!=8.3.0,>=6.1.0 in /usr/local/lib/python3.9/dist-packages (from scikit-image->efficientnet) (8.4.0)
Requirement already satisfied: imageio>=2.4.1 in /usr/local/lib/python3.9/dist-packages (from scikit-image->efficientnet) (2.25.1)
Requirement already satisfied: packaging>=20.0 in /usr/local/lib/python3.9/dist-packages (from scikit-image->efficientnet) (23.1)
Requirement already satisfied: scipy>=1.4.1 in /usr/local/lib/python3.9/dist-packages (from scikit-image->efficientnet) (1.10.1)
Requirement already satisfied: PyWavelets>=1.1.1 in /usr/local/lib/python3.9/dist-packages (from scikit-image->efficientnet) (1.4.1)
Installing collected packages: keras-applications, efficientnet
Successfully installed efficientnet-1.1.1 keras-applications-1.0.8
import tensorflow as tf
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras import layers
from tensorflow.keras import Model
import matplotlib.pyplot as plt
import efficientnet.keras as efn
from tensorflow.keras.optimizers import RMSprop
from tensorflow.python.keras.layers import Dense, Flatten
import time
import os
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
from google.colab import auth
from google.auth import default
from google.colab import drive
import gspread
auth.authenticate_user()
creds, _ = default()
gc = gspread.authorize(creds)
drive.mount('/content/drive')
Mounted at /content/drive
base_dir = '/content/drive/MyDrive/Data_Science_Final/aml'
train_dir = os.path.join(base_dir, 'training_set')
test_dir = os.path.join(base_dir, 'test_set')
# Directory with training cat pictures
train_cats_dir = os.path.join(train_dir, 'cats')
# Directory with training dog pictures
train_dogs_dir = os.path.join(train_dir, 'dogs')
# Directory with test cat pictures
test_cats_dir = os.path.join(test_dir, 'cats')
# Directory with test dog pictures
test_dogs_dir = os.path.join(test_dir, 'dogs')
#Display cat and dog images
# Set up matplotlib fig, and size it to fit 4x4 pics
nrows = 4
ncols = 4
fig = plt.gcf()
fig.set_size_inches(ncols*4, nrows*4)
pic_index = 100
train_cat_fnames = os.listdir(train_cats_dir)
train_dog_fnames = os.listdir(train_dogs_dir)
next_cat_pix = [os.path.join(train_cats_dir, fname)
for fname in train_cat_fnames[ pic_index-8:pic_index]
]
next_dog_pix = [os.path.join(train_dogs_dir, fname)
for fname in train_dog_fnames[ pic_index-8:pic_index]
]
for i, img_path in enumerate(next_cat_pix + next_dog_pix):
    # Set up subplot; subplot indices start at 1
    sp = plt.subplot(nrows, ncols, i + 1)
    sp.axis('Off')  # Don't show axes (or gridlines)
    img = mpimg.imread(img_path)
    plt.imshow(img)
plt.show()
# Add rescaling and augmentation to ImageDataGenerator for the training set
train_datagen = ImageDataGenerator(rescale = 1./255., rotation_range = 40, width_shift_range = 0.2, height_shift_range = 0.2, shear_range = 0.2, zoom_range = 0.2, horizontal_flip = True, validation_split=0.1) # set validation split
# Rescale validation set. No augmentation on the validation set.
validation_datagen = ImageDataGenerator(rescale = 1./255.,validation_split=0.1) # set validation split
#Read images directly from directory.
train_generator = train_datagen.flow_from_directory(train_dir, seed = 42, shuffle = True, batch_size = 20, class_mode = 'binary', target_size = (224, 224), subset='training') #set as training data
validation_generator = validation_datagen.flow_from_directory(train_dir, seed = 42, shuffle = True, batch_size = 20, class_mode = 'binary', target_size = (224, 224), subset='validation') # same directory as training data. Set as validation data
Found 7205 images belonging to 2 classes.
Found 800 images belonging to 2 classes.
#Instantiates the EfficientNet architecture
base_model = efn.EfficientNetB0(input_shape = (224, 224, 3), include_top = False, weights = 'imagenet')
Downloading data from https://github.com/Callidior/keras-applications/releases/download/efficientnet/efficientnet-b0_weights_tf_dim_ordering_tf_kernels_autoaugment_notop.h5 16804768/16804768 [==============================] - 2s 0us/step
# Set trainable attribute to false for all of the base model layers
for layer in base_model.layers:
    layer.trainable = False
#Build on top of existing base model.
x = base_model.output
x = layers.Flatten()(x) #convert to 1D array
x = layers.Dense(1024, activation="relu")(x) #fully connected layer with 1,024 hidden units and ReLU activation
x = layers.Dropout(0.5)(x) #Drops 50% of inputs to zero at each training iteration (prevents overfitting)
# Add a final sigmoid layer with 1 node for classification output (probability between 0 and 1)
predictions = layers.Dense(1, activation="sigmoid")(x)
model_final = Model(inputs = base_model.input, outputs = predictions)
#Print model summary (Keras models provide summary() directly, so no extra import is needed)
model_sum = model_final.summary()
Model: "model"
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_1 (InputLayer) [(None, 224, 224, 3 0 []
)]
stem_conv (Conv2D) (None, 112, 112, 32 864 ['input_1[0][0]']
)
stem_bn (BatchNormalization) (None, 112, 112, 32 128 ['stem_conv[0][0]']
)
stem_activation (Activation) (None, 112, 112, 32 0 ['stem_bn[0][0]']
)
block1a_dwconv (DepthwiseConv2 (None, 112, 112, 32 288 ['stem_activation[0][0]']
D) )
block1a_bn (BatchNormalization (None, 112, 112, 32 128 ['block1a_dwconv[0][0]']
) )
block1a_activation (Activation (None, 112, 112, 32 0 ['block1a_bn[0][0]']
) )
block1a_se_squeeze (GlobalAver (None, 32) 0 ['block1a_activation[0][0]']
agePooling2D)
block1a_se_reshape (Reshape) (None, 1, 1, 32) 0 ['block1a_se_squeeze[0][0]']
block1a_se_reduce (Conv2D) (None, 1, 1, 8) 264 ['block1a_se_reshape[0][0]']
block1a_se_expand (Conv2D) (None, 1, 1, 32) 288 ['block1a_se_reduce[0][0]']
block1a_se_excite (Multiply) (None, 112, 112, 32 0 ['block1a_activation[0][0]',
) 'block1a_se_expand[0][0]']
block1a_project_conv (Conv2D) (None, 112, 112, 16 512 ['block1a_se_excite[0][0]']
)
block1a_project_bn (BatchNorma (None, 112, 112, 16 64 ['block1a_project_conv[0][0]']
lization) )
block2a_expand_conv (Conv2D) (None, 112, 112, 96 1536 ['block1a_project_bn[0][0]']
)
block2a_expand_bn (BatchNormal (None, 112, 112, 96 384 ['block2a_expand_conv[0][0]']
ization) )
block2a_expand_activation (Act (None, 112, 112, 96 0 ['block2a_expand_bn[0][0]']
ivation) )
block2a_dwconv (DepthwiseConv2 (None, 56, 56, 96) 864 ['block2a_expand_activation[0][0]
D) ']
block2a_bn (BatchNormalization (None, 56, 56, 96) 384 ['block2a_dwconv[0][0]']
)
block2a_activation (Activation (None, 56, 56, 96) 0 ['block2a_bn[0][0]']
)
block2a_se_squeeze (GlobalAver (None, 96) 0 ['block2a_activation[0][0]']
agePooling2D)
block2a_se_reshape (Reshape) (None, 1, 1, 96) 0 ['block2a_se_squeeze[0][0]']
block2a_se_reduce (Conv2D) (None, 1, 1, 4) 388 ['block2a_se_reshape[0][0]']
block2a_se_expand (Conv2D) (None, 1, 1, 96) 480 ['block2a_se_reduce[0][0]']
block2a_se_excite (Multiply) (None, 56, 56, 96) 0 ['block2a_activation[0][0]',
'block2a_se_expand[0][0]']
block2a_project_conv (Conv2D) (None, 56, 56, 24) 2304 ['block2a_se_excite[0][0]']
block2a_project_bn (BatchNorma (None, 56, 56, 24) 96 ['block2a_project_conv[0][0]']
lization)
block2b_expand_conv (Conv2D) (None, 56, 56, 144) 3456 ['block2a_project_bn[0][0]']
block2b_expand_bn (BatchNormal (None, 56, 56, 144) 576 ['block2b_expand_conv[0][0]']
ization)
block2b_expand_activation (Act (None, 56, 56, 144) 0 ['block2b_expand_bn[0][0]']
ivation)
block2b_dwconv (DepthwiseConv2 (None, 56, 56, 144) 1296 ['block2b_expand_activation[0][0]
D) ']
block2b_bn (BatchNormalization (None, 56, 56, 144) 576 ['block2b_dwconv[0][0]']
)
block2b_activation (Activation (None, 56, 56, 144) 0 ['block2b_bn[0][0]']
)
block2b_se_squeeze (GlobalAver (None, 144) 0 ['block2b_activation[0][0]']
agePooling2D)
block2b_se_reshape (Reshape) (None, 1, 1, 144) 0 ['block2b_se_squeeze[0][0]']
block2b_se_reduce (Conv2D) (None, 1, 1, 6) 870 ['block2b_se_reshape[0][0]']
block2b_se_expand (Conv2D) (None, 1, 1, 144) 1008 ['block2b_se_reduce[0][0]']
block2b_se_excite (Multiply) (None, 56, 56, 144) 0 ['block2b_activation[0][0]',
'block2b_se_expand[0][0]']
block2b_project_conv (Conv2D) (None, 56, 56, 24) 3456 ['block2b_se_excite[0][0]']
block2b_project_bn (BatchNorma (None, 56, 56, 24) 96 ['block2b_project_conv[0][0]']
lization)
block2b_drop (FixedDropout) (None, 56, 56, 24) 0 ['block2b_project_bn[0][0]']
block2b_add (Add) (None, 56, 56, 24) 0 ['block2b_drop[0][0]',
'block2a_project_bn[0][0]']
block3a_expand_conv (Conv2D) (None, 56, 56, 144) 3456 ['block2b_add[0][0]']
block3a_expand_bn (BatchNormal (None, 56, 56, 144) 576 ['block3a_expand_conv[0][0]']
ization)
block3a_expand_activation (Act (None, 56, 56, 144) 0 ['block3a_expand_bn[0][0]']
ivation)
block3a_dwconv (DepthwiseConv2 (None, 28, 28, 144) 3600 ['block3a_expand_activation[0][0]
D) ']
block3a_bn (BatchNormalization (None, 28, 28, 144) 576 ['block3a_dwconv[0][0]']
)
block3a_activation (Activation (None, 28, 28, 144) 0 ['block3a_bn[0][0]']
)
block3a_se_squeeze (GlobalAver (None, 144) 0 ['block3a_activation[0][0]']
agePooling2D)
block3a_se_reshape (Reshape) (None, 1, 1, 144) 0 ['block3a_se_squeeze[0][0]']
block3a_se_reduce (Conv2D) (None, 1, 1, 6) 870 ['block3a_se_reshape[0][0]']
block3a_se_expand (Conv2D) (None, 1, 1, 144) 1008 ['block3a_se_reduce[0][0]']
block3a_se_excite (Multiply) (None, 28, 28, 144) 0 ['block3a_activation[0][0]',
'block3a_se_expand[0][0]']
block3a_project_conv (Conv2D) (None, 28, 28, 40) 5760 ['block3a_se_excite[0][0]']
block3a_project_bn (BatchNorma (None, 28, 28, 40) 160 ['block3a_project_conv[0][0]']
lization)
block3b_expand_conv (Conv2D) (None, 28, 28, 240) 9600 ['block3a_project_bn[0][0]']
block3b_expand_bn (BatchNormal (None, 28, 28, 240) 960 ['block3b_expand_conv[0][0]']
ization)
block3b_expand_activation (Act (None, 28, 28, 240) 0 ['block3b_expand_bn[0][0]']
ivation)
block3b_dwconv (DepthwiseConv2 (None, 28, 28, 240) 6000 ['block3b_expand_activation[0][0]
D) ']
block3b_bn (BatchNormalization (None, 28, 28, 240) 960 ['block3b_dwconv[0][0]']
)
block3b_activation (Activation (None, 28, 28, 240) 0 ['block3b_bn[0][0]']
)
block3b_se_squeeze (GlobalAver (None, 240) 0 ['block3b_activation[0][0]']
agePooling2D)
block3b_se_reshape (Reshape) (None, 1, 1, 240) 0 ['block3b_se_squeeze[0][0]']
block3b_se_reduce (Conv2D) (None, 1, 1, 10) 2410 ['block3b_se_reshape[0][0]']
block3b_se_expand (Conv2D) (None, 1, 1, 240) 2640 ['block3b_se_reduce[0][0]']
block3b_se_excite (Multiply) (None, 28, 28, 240) 0 ['block3b_activation[0][0]',
'block3b_se_expand[0][0]']
block3b_project_conv (Conv2D) (None, 28, 28, 40) 9600 ['block3b_se_excite[0][0]']
block3b_project_bn (BatchNorma (None, 28, 28, 40) 160 ['block3b_project_conv[0][0]']
lization)
block3b_drop (FixedDropout) (None, 28, 28, 40) 0 ['block3b_project_bn[0][0]']
block3b_add (Add) (None, 28, 28, 40) 0 ['block3b_drop[0][0]',
'block3a_project_bn[0][0]']
block4a_expand_conv (Conv2D) (None, 28, 28, 240) 9600 ['block3b_add[0][0]']
block4a_expand_bn (BatchNormal (None, 28, 28, 240) 960 ['block4a_expand_conv[0][0]']
ization)
block4a_expand_activation (Act (None, 28, 28, 240) 0 ['block4a_expand_bn[0][0]']
ivation)
block4a_dwconv (DepthwiseConv2 (None, 14, 14, 240) 2160 ['block4a_expand_activation[0][0]
D) ']
block4a_bn (BatchNormalization (None, 14, 14, 240) 960 ['block4a_dwconv[0][0]']
)
block4a_activation (Activation (None, 14, 14, 240) 0 ['block4a_bn[0][0]']
)
block4a_se_squeeze (GlobalAver (None, 240) 0 ['block4a_activation[0][0]']
agePooling2D)
block4a_se_reshape (Reshape) (None, 1, 1, 240) 0 ['block4a_se_squeeze[0][0]']
block4a_se_reduce (Conv2D) (None, 1, 1, 10) 2410 ['block4a_se_reshape[0][0]']
block4a_se_expand (Conv2D) (None, 1, 1, 240) 2640 ['block4a_se_reduce[0][0]']
block4a_se_excite (Multiply) (None, 14, 14, 240) 0 ['block4a_activation[0][0]',
'block4a_se_expand[0][0]']
block4a_project_conv (Conv2D) (None, 14, 14, 80) 19200 ['block4a_se_excite[0][0]']
block4a_project_bn (BatchNorma (None, 14, 14, 80) 320 ['block4a_project_conv[0][0]']
lization)
block4b_expand_conv (Conv2D) (None, 14, 14, 480) 38400 ['block4a_project_bn[0][0]']
block4b_expand_bn (BatchNormal (None, 14, 14, 480) 1920 ['block4b_expand_conv[0][0]']
ization)
block4b_expand_activation (Act (None, 14, 14, 480) 0 ['block4b_expand_bn[0][0]']
ivation)
block4b_dwconv (DepthwiseConv2 (None, 14, 14, 480) 4320 ['block4b_expand_activation[0][0]
D) ']
block4b_bn (BatchNormalization (None, 14, 14, 480) 1920 ['block4b_dwconv[0][0]']
)
block4b_activation (Activation (None, 14, 14, 480) 0 ['block4b_bn[0][0]']
)
block4b_se_squeeze (GlobalAver (None, 480) 0 ['block4b_activation[0][0]']
agePooling2D)
block4b_se_reshape (Reshape) (None, 1, 1, 480) 0 ['block4b_se_squeeze[0][0]']
block4b_se_reduce (Conv2D) (None, 1, 1, 20) 9620 ['block4b_se_reshape[0][0]']
block4b_se_expand (Conv2D) (None, 1, 1, 480) 10080 ['block4b_se_reduce[0][0]']
block4b_se_excite (Multiply) (None, 14, 14, 480) 0 ['block4b_activation[0][0]',
'block4b_se_expand[0][0]']
block4b_project_conv (Conv2D) (None, 14, 14, 80) 38400 ['block4b_se_excite[0][0]']
block4b_project_bn (BatchNorma (None, 14, 14, 80) 320 ['block4b_project_conv[0][0]']
lization)
block4b_drop (FixedDropout) (None, 14, 14, 80) 0 ['block4b_project_bn[0][0]']
block4b_add (Add) (None, 14, 14, 80) 0 ['block4b_drop[0][0]',
'block4a_project_bn[0][0]']
block4c_expand_conv (Conv2D) (None, 14, 14, 480) 38400 ['block4b_add[0][0]']
block4c_expand_bn (BatchNormal (None, 14, 14, 480) 1920 ['block4c_expand_conv[0][0]']
ization)
block4c_expand_activation (Act (None, 14, 14, 480) 0 ['block4c_expand_bn[0][0]']
ivation)
block4c_dwconv (DepthwiseConv2 (None, 14, 14, 480) 4320 ['block4c_expand_activation[0][0]
D) ']
block4c_bn (BatchNormalization (None, 14, 14, 480) 1920 ['block4c_dwconv[0][0]']
)
block4c_activation (Activation (None, 14, 14, 480) 0 ['block4c_bn[0][0]']
)
block4c_se_squeeze (GlobalAver (None, 480) 0 ['block4c_activation[0][0]']
agePooling2D)
block4c_se_reshape (Reshape) (None, 1, 1, 480) 0 ['block4c_se_squeeze[0][0]']
block4c_se_reduce (Conv2D) (None, 1, 1, 20) 9620 ['block4c_se_reshape[0][0]']
block4c_se_expand (Conv2D) (None, 1, 1, 480) 10080 ['block4c_se_reduce[0][0]']
block4c_se_excite (Multiply) (None, 14, 14, 480) 0 ['block4c_activation[0][0]',
'block4c_se_expand[0][0]']
block4c_project_conv (Conv2D) (None, 14, 14, 80) 38400 ['block4c_se_excite[0][0]']
block4c_project_bn (BatchNorma (None, 14, 14, 80) 320 ['block4c_project_conv[0][0]']
lization)
block4c_drop (FixedDropout) (None, 14, 14, 80) 0 ['block4c_project_bn[0][0]']
block4c_add (Add) (None, 14, 14, 80) 0 ['block4c_drop[0][0]',
'block4b_add[0][0]']
block5a_expand_conv (Conv2D) (None, 14, 14, 480) 38400 ['block4c_add[0][0]']
block5a_expand_bn (BatchNormal (None, 14, 14, 480) 1920 ['block5a_expand_conv[0][0]']
ization)
block5a_expand_activation (Act (None, 14, 14, 480) 0 ['block5a_expand_bn[0][0]']
ivation)
block5a_dwconv (DepthwiseConv2 (None, 14, 14, 480) 12000 ['block5a_expand_activation[0][0]
D) ']
block5a_bn (BatchNormalization (None, 14, 14, 480) 1920 ['block5a_dwconv[0][0]']
)
block5a_activation (Activation (None, 14, 14, 480) 0 ['block5a_bn[0][0]']
)
block5a_se_squeeze (GlobalAver (None, 480) 0 ['block5a_activation[0][0]']
agePooling2D)
block5a_se_reshape (Reshape) (None, 1, 1, 480) 0 ['block5a_se_squeeze[0][0]']
block5a_se_reduce (Conv2D) (None, 1, 1, 20) 9620 ['block5a_se_reshape[0][0]']
block5a_se_expand (Conv2D) (None, 1, 1, 480) 10080 ['block5a_se_reduce[0][0]']
block5a_se_excite (Multiply) (None, 14, 14, 480) 0 ['block5a_activation[0][0]',
'block5a_se_expand[0][0]']
block5a_project_conv (Conv2D) (None, 14, 14, 112) 53760 ['block5a_se_excite[0][0]']
block5a_project_bn (BatchNorma (None, 14, 14, 112) 448 ['block5a_project_conv[0][0]']
lization)
block5b_expand_conv (Conv2D) (None, 14, 14, 672) 75264 ['block5a_project_bn[0][0]']
block5b_expand_bn (BatchNormal (None, 14, 14, 672) 2688 ['block5b_expand_conv[0][0]']
ization)
block5b_expand_activation (Act (None, 14, 14, 672) 0 ['block5b_expand_bn[0][0]']
ivation)
block5b_dwconv (DepthwiseConv2 (None, 14, 14, 672) 16800 ['block5b_expand_activation[0][0]
D) ']
block5b_bn (BatchNormalization (None, 14, 14, 672) 2688 ['block5b_dwconv[0][0]']
)
block5b_activation (Activation (None, 14, 14, 672) 0 ['block5b_bn[0][0]']
)
block5b_se_squeeze (GlobalAver (None, 672) 0 ['block5b_activation[0][0]']
agePooling2D)
block5b_se_reshape (Reshape) (None, 1, 1, 672) 0 ['block5b_se_squeeze[0][0]']
block5b_se_reduce (Conv2D) (None, 1, 1, 28) 18844 ['block5b_se_reshape[0][0]']
block5b_se_expand (Conv2D) (None, 1, 1, 672) 19488 ['block5b_se_reduce[0][0]']
block5b_se_excite (Multiply) (None, 14, 14, 672) 0 ['block5b_activation[0][0]',
'block5b_se_expand[0][0]']
block5b_project_conv (Conv2D) (None, 14, 14, 112) 75264 ['block5b_se_excite[0][0]']
block5b_project_bn (BatchNorma (None, 14, 14, 112) 448 ['block5b_project_conv[0][0]']
lization)
block5b_drop (FixedDropout) (None, 14, 14, 112) 0 ['block5b_project_bn[0][0]']
block5b_add (Add) (None, 14, 14, 112) 0 ['block5b_drop[0][0]',
'block5a_project_bn[0][0]']
block5c_expand_conv (Conv2D) (None, 14, 14, 672) 75264 ['block5b_add[0][0]']
block5c_expand_bn (BatchNormal (None, 14, 14, 672) 2688 ['block5c_expand_conv[0][0]']
ization)
block5c_expand_activation (Act (None, 14, 14, 672) 0 ['block5c_expand_bn[0][0]']
ivation)
block5c_dwconv (DepthwiseConv2 (None, 14, 14, 672) 16800 ['block5c_expand_activation[0][0]
D) ']
block5c_bn (BatchNormalization (None, 14, 14, 672) 2688 ['block5c_dwconv[0][0]']
)
block5c_activation (Activation (None, 14, 14, 672) 0 ['block5c_bn[0][0]']
)
block5c_se_squeeze (GlobalAver (None, 672) 0 ['block5c_activation[0][0]']
agePooling2D)
block5c_se_reshape (Reshape) (None, 1, 1, 672) 0 ['block5c_se_squeeze[0][0]']
block5c_se_reduce (Conv2D) (None, 1, 1, 28) 18844 ['block5c_se_reshape[0][0]']
block5c_se_expand (Conv2D) (None, 1, 1, 672) 19488 ['block5c_se_reduce[0][0]']
block5c_se_excite (Multiply) (None, 14, 14, 672) 0 ['block5c_activation[0][0]',
'block5c_se_expand[0][0]']
block5c_project_conv (Conv2D) (None, 14, 14, 112) 75264 ['block5c_se_excite[0][0]']
block5c_project_bn (BatchNorma (None, 14, 14, 112) 448 ['block5c_project_conv[0][0]']
lization)
block5c_drop (FixedDropout) (None, 14, 14, 112) 0 ['block5c_project_bn[0][0]']
block5c_add (Add) (None, 14, 14, 112) 0 ['block5c_drop[0][0]',
'block5b_add[0][0]']
block6a_expand_conv (Conv2D) (None, 14, 14, 672) 75264 ['block5c_add[0][0]']
block6a_expand_bn (BatchNormal (None, 14, 14, 672) 2688 ['block6a_expand_conv[0][0]']
ization)
block6a_expand_activation (Act (None, 14, 14, 672) 0 ['block6a_expand_bn[0][0]']
ivation)
block6a_dwconv (DepthwiseConv2 (None, 7, 7, 672) 16800 ['block6a_expand_activation[0][0]
D) ']
block6a_bn (BatchNormalization (None, 7, 7, 672) 2688 ['block6a_dwconv[0][0]']
)
block6a_activation (Activation (None, 7, 7, 672) 0 ['block6a_bn[0][0]']
)
block6a_se_squeeze (GlobalAver (None, 672) 0 ['block6a_activation[0][0]']
agePooling2D)
block6a_se_reshape (Reshape) (None, 1, 1, 672) 0 ['block6a_se_squeeze[0][0]']
block6a_se_reduce (Conv2D) (None, 1, 1, 28) 18844 ['block6a_se_reshape[0][0]']
block6a_se_expand (Conv2D) (None, 1, 1, 672) 19488 ['block6a_se_reduce[0][0]']
block6a_se_excite (Multiply) (None, 7, 7, 672) 0 ['block6a_activation[0][0]',
'block6a_se_expand[0][0]']
block6a_project_conv (Conv2D) (None, 7, 7, 192) 129024 ['block6a_se_excite[0][0]']
block6a_project_bn (BatchNorma (None, 7, 7, 192) 768 ['block6a_project_conv[0][0]']
lization)
block6b_expand_conv (Conv2D) (None, 7, 7, 1152) 221184 ['block6a_project_bn[0][0]']
block6b_expand_bn (BatchNormal (None, 7, 7, 1152) 4608 ['block6b_expand_conv[0][0]']
ization)
block6b_expand_activation (Act (None, 7, 7, 1152) 0 ['block6b_expand_bn[0][0]']
ivation)
block6b_dwconv (DepthwiseConv2 (None, 7, 7, 1152) 28800 ['block6b_expand_activation[0][0]
D) ']
block6b_bn (BatchNormalization (None, 7, 7, 1152) 4608 ['block6b_dwconv[0][0]']
)
block6b_activation (Activation (None, 7, 7, 1152) 0 ['block6b_bn[0][0]']
)
block6b_se_squeeze (GlobalAver (None, 1152) 0 ['block6b_activation[0][0]']
agePooling2D)
block6b_se_reshape (Reshape) (None, 1, 1, 1152) 0 ['block6b_se_squeeze[0][0]']
block6b_se_reduce (Conv2D) (None, 1, 1, 48) 55344 ['block6b_se_reshape[0][0]']
block6b_se_expand (Conv2D) (None, 1, 1, 1152) 56448 ['block6b_se_reduce[0][0]']
block6b_se_excite (Multiply) (None, 7, 7, 1152) 0 ['block6b_activation[0][0]',
'block6b_se_expand[0][0]']
block6b_project_conv (Conv2D) (None, 7, 7, 192) 221184 ['block6b_se_excite[0][0]']
block6b_project_bn (BatchNorma (None, 7, 7, 192) 768 ['block6b_project_conv[0][0]']
lization)
block6b_drop (FixedDropout) (None, 7, 7, 192) 0 ['block6b_project_bn[0][0]']
block6b_add (Add) (None, 7, 7, 192) 0 ['block6b_drop[0][0]',
'block6a_project_bn[0][0]']
block6c_expand_conv (Conv2D) (None, 7, 7, 1152) 221184 ['block6b_add[0][0]']
block6c_expand_bn (BatchNormal (None, 7, 7, 1152) 4608 ['block6c_expand_conv[0][0]']
ization)
block6c_expand_activation (Act (None, 7, 7, 1152) 0 ['block6c_expand_bn[0][0]']
ivation)
block6c_dwconv (DepthwiseConv2 (None, 7, 7, 1152) 28800 ['block6c_expand_activation[0][0]
D) ']
block6c_bn (BatchNormalization (None, 7, 7, 1152) 4608 ['block6c_dwconv[0][0]']
)
block6c_activation (Activation (None, 7, 7, 1152) 0 ['block6c_bn[0][0]']
)
block6c_se_squeeze (GlobalAver (None, 1152) 0 ['block6c_activation[0][0]']
agePooling2D)
block6c_se_reshape (Reshape) (None, 1, 1, 1152) 0 ['block6c_se_squeeze[0][0]']
block6c_se_reduce (Conv2D) (None, 1, 1, 48) 55344 ['block6c_se_reshape[0][0]']
block6c_se_expand (Conv2D) (None, 1, 1, 1152) 56448 ['block6c_se_reduce[0][0]']
block6c_se_excite (Multiply) (None, 7, 7, 1152) 0 ['block6c_activation[0][0]',
'block6c_se_expand[0][0]']
block6c_project_conv (Conv2D) (None, 7, 7, 192) 221184 ['block6c_se_excite[0][0]']
block6c_project_bn (BatchNorma (None, 7, 7, 192) 768 ['block6c_project_conv[0][0]']
lization)
block6c_drop (FixedDropout) (None, 7, 7, 192) 0 ['block6c_project_bn[0][0]']
block6c_add (Add) (None, 7, 7, 192) 0 ['block6c_drop[0][0]',
'block6b_add[0][0]']
block6d_expand_conv (Conv2D) (None, 7, 7, 1152) 221184 ['block6c_add[0][0]']
block6d_expand_bn (BatchNormal (None, 7, 7, 1152) 4608 ['block6d_expand_conv[0][0]']
ization)
block6d_expand_activation (Act (None, 7, 7, 1152) 0 ['block6d_expand_bn[0][0]']
ivation)
block6d_dwconv (DepthwiseConv2 (None, 7, 7, 1152) 28800 ['block6d_expand_activation[0][0]
D) ']
block6d_bn (BatchNormalization (None, 7, 7, 1152) 4608 ['block6d_dwconv[0][0]']
)
block6d_activation (Activation (None, 7, 7, 1152) 0 ['block6d_bn[0][0]']
)
block6d_se_squeeze (GlobalAver (None, 1152) 0 ['block6d_activation[0][0]']
agePooling2D)
block6d_se_reshape (Reshape) (None, 1, 1, 1152) 0 ['block6d_se_squeeze[0][0]']
block6d_se_reduce (Conv2D) (None, 1, 1, 48) 55344 ['block6d_se_reshape[0][0]']
block6d_se_expand (Conv2D) (None, 1, 1, 1152) 56448 ['block6d_se_reduce[0][0]']
block6d_se_excite (Multiply) (None, 7, 7, 1152) 0 ['block6d_activation[0][0]',
'block6d_se_expand[0][0]']
block6d_project_conv (Conv2D) (None, 7, 7, 192) 221184 ['block6d_se_excite[0][0]']
block6d_project_bn (BatchNorma (None, 7, 7, 192) 768 ['block6d_project_conv[0][0]']
lization)
block6d_drop (FixedDropout) (None, 7, 7, 192) 0 ['block6d_project_bn[0][0]']
block6d_add (Add) (None, 7, 7, 192) 0 ['block6d_drop[0][0]',
'block6c_add[0][0]']
block7a_expand_conv (Conv2D) (None, 7, 7, 1152) 221184 ['block6d_add[0][0]']
block7a_expand_bn (BatchNormal (None, 7, 7, 1152) 4608 ['block7a_expand_conv[0][0]']
ization)
block7a_expand_activation (Act (None, 7, 7, 1152) 0 ['block7a_expand_bn[0][0]']
ivation)
block7a_dwconv (DepthwiseConv2 (None, 7, 7, 1152) 10368 ['block7a_expand_activation[0][0]
D) ']
block7a_bn (BatchNormalization (None, 7, 7, 1152) 4608 ['block7a_dwconv[0][0]']
)
block7a_activation (Activation (None, 7, 7, 1152) 0 ['block7a_bn[0][0]']
)
block7a_se_squeeze (GlobalAver (None, 1152) 0 ['block7a_activation[0][0]']
agePooling2D)
block7a_se_reshape (Reshape) (None, 1, 1, 1152) 0 ['block7a_se_squeeze[0][0]']
block7a_se_reduce (Conv2D) (None, 1, 1, 48) 55344 ['block7a_se_reshape[0][0]']
block7a_se_expand (Conv2D) (None, 1, 1, 1152) 56448 ['block7a_se_reduce[0][0]']
block7a_se_excite (Multiply) (None, 7, 7, 1152) 0 ['block7a_activation[0][0]',
'block7a_se_expand[0][0]']
block7a_project_conv (Conv2D) (None, 7, 7, 320) 368640 ['block7a_se_excite[0][0]']
block7a_project_bn (BatchNorma (None, 7, 7, 320) 1280 ['block7a_project_conv[0][0]']
lization)
top_conv (Conv2D) (None, 7, 7, 1280) 409600 ['block7a_project_bn[0][0]']
top_bn (BatchNormalization) (None, 7, 7, 1280) 5120 ['top_conv[0][0]']
top_activation (Activation) (None, 7, 7, 1280) 0 ['top_bn[0][0]']
flatten (Flatten) (None, 62720) 0 ['top_activation[0][0]']
dense (Dense) (None, 1024) 64226304 ['flatten[0][0]']
dropout (Dropout) (None, 1024) 0 ['dense[0][0]']
dense_1 (Dense) (None, 1) 1025 ['dropout[0][0]']
==================================================================================================
Total params: 68,276,893
Trainable params: 64,227,329
Non-trainable params: 4,049,564
__________________________________________________________________________________________________
#get total parameters
model_params = model_final.count_params()
# Specify the optimizer, loss function and evaluation metrics.
model_final.compile(loss='binary_crossentropy', optimizer=tf.keras.optimizers.RMSprop(learning_rate=0.0001), metrics=['accuracy'])
t1 = time.time()
#train the model
eff_history = model_final.fit_generator(train_generator, validation_data = validation_generator, steps_per_epoch = 100, epochs = 10)
fit_time = time.time() - t1
<ipython-input-14-b7f31b017b18>:3: UserWarning: `Model.fit_generator` is deprecated and will be removed in a future version. Please use `Model.fit`, which supports generators. eff_history = model_final.fit_generator(train_generator, validation_data = validation_generator, steps_per_epoch = 100, epochs = 10)
Epoch 1/10 100/100 [==============================] - 739s 7s/step - loss: 0.3183 - accuracy: 0.9113 - val_loss: 0.0838 - val_accuracy: 0.9775
Epoch 2/10 100/100 [==============================] - 394s 4s/step - loss: 0.2768 - accuracy: 0.9275 - val_loss: 0.0838 - val_accuracy: 0.9825
Epoch 3/10 100/100 [==============================] - 284s 3s/step - loss: 0.3022 - accuracy: 0.9200 - val_loss: 0.0572 - val_accuracy: 0.9850
Epoch 4/10 100/100 [==============================] - 203s 2s/step - loss: 0.2296 - accuracy: 0.9451 - val_loss: 0.0614 - val_accuracy: 0.9875
Epoch 5/10 100/100 [==============================] - 154s 2s/step - loss: 0.2132 - accuracy: 0.9436 - val_loss: 0.1199 - val_accuracy: 0.9787
Epoch 6/10 100/100 [==============================] - 122s 1s/step - loss: 0.2683 - accuracy: 0.9420 - val_loss: 0.1028 - val_accuracy: 0.9837
Epoch 7/10 100/100 [==============================] - 91s 914ms/step - loss: 0.2781 - accuracy: 0.9315 - val_loss: 0.1754 - val_accuracy: 0.9787
Epoch 8/10 100/100 [==============================] - 87s 872ms/step - loss: 0.2735 - accuracy: 0.9400 - val_loss: 0.1717 - val_accuracy: 0.9837
Epoch 9/10 100/100 [==============================] - 63s 636ms/step - loss: 0.2872 - accuracy: 0.9375 - val_loss: 0.1673 - val_accuracy: 0.9787
Epoch 10/10 100/100 [==============================] - 54s 538ms/step - loss: 0.3210 - accuracy: 0.9515 - val_loss: 0.2309 - val_accuracy: 0.9775
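As the deprecation warning above notes, `Model.fit_generator` has been superseded by `Model.fit`, which accepts the same generators. A minimal sketch of the equivalent call with the same settings (not the cell that produced the log above):
# Equivalent training call with the non-deprecated Model.fit API (sketch, same generators and settings as above)
eff_history = model_final.fit(train_generator, validation_data=validation_generator, steps_per_epoch=100, epochs=10)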
# time it took to fit the model
print(fit_time)
2195.61941075325
#Plot training and validation accuracy and loss for each epoch
acc = eff_history.history['accuracy']
val_acc = eff_history.history['val_accuracy']
loss = eff_history.history['loss']
val_loss = eff_history.history['val_loss']
epochs = range(1,len(acc) + 1)
plt.plot(epochs,acc,label = 'Training Accuracy')
plt.plot(epochs,val_acc,label = 'Validation Accuracy')
plt.title('Training and Validation Accuracy')
plt.legend()
plt.figure()
plt.plot(epochs,loss,label = 'Training loss')
plt.plot(epochs,val_loss,label = 'Validation Loss')
plt.title('Training and Validation Loss')
plt.legend()
plt.show()
# Test dataset
test_datagen = ImageDataGenerator(rescale=1./255)
test_generator = test_datagen.flow_from_directory(
test_dir,
target_size=(224, 224),
shuffle = False,
class_mode='binary',
batch_size=1)
Found 2023 images belonging to 2 classes.
#Get test length
filenames = test_generator.filenames
nb_samples = len(filenames)
#Predict on test set
predict = model_final.predict_generator(test_generator,steps = nb_samples)
<ipython-input-19-7710eff794cf>:2: UserWarning: `Model.predict_generator` is deprecated and will be removed in a future version. Please use `Model.predict`, which supports generators. predict = model_final.predict_generator(test_generator,steps = nb_samples)
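Likewise, a minimal sketch of the equivalent `Model.predict` call suggested by the warning:
# Equivalent prediction call with the non-deprecated Model.predict API (sketch)
predict = model_final.predict(test_generator, steps=nb_samples)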
#Get list of prediction results
pred_list = []
for i in predict:
    if i > 0.5:
        result = 1 #dog
        pred_list.append(result)
    else:
        result = 0 #cat
        pred_list.append(result)
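The same 0.5 threshold can also be applied in a single vectorized step; a minimal NumPy sketch equivalent to the loop above:
# Vectorized equivalent of the thresholding loop above (sketch): 1 = dog, 0 = cat
import numpy as np
pred_list = (np.ravel(predict) > 0.5).astype(int).tolist()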
#Create dataframe of image ID, image true label, image predicted label
import pandas as pd
image_ids = [name.split('/')[-1] for name in test_generator.filenames]
image_label = [name.split('/')[0] for name in test_generator.filenames]
data = {'id': image_ids, 'label':image_label, 'prediction':pred_list}
data_df = pd.DataFrame(data)
data_df.label.replace(('cats', 'dogs'), (0, 1), inplace=True) # change cat and dog label to 0 or 1
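An equivalent, arguably more explicit way to encode the labels is `Series.map`; a minimal sketch (the mapping dict mirrors the class folder names used by the generator):
# Equivalent label encoding with an explicit mapping (sketch)
data_df['label'] = data_df['label'].map({'cats': 0, 'dogs': 1})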
#Get test accuracy score
from sklearn.metrics import accuracy_score, confusion_matrix
test_accuracy = accuracy_score(data_df['label'], data_df['prediction'])
print('Test Accuracy: ', round((test_accuracy * 100), 2), "%")
Test Accuracy: 97.13 %
from sklearn.metrics import classification_report
#Classification Report
print(classification_report(data_df['label'], data_df['prediction']))
precision recall f1-score support
0 0.97 0.97 0.97 1011
1 0.97 0.97 0.97 1012
accuracy 0.97 2023
macro avg 0.97 0.97 0.97 2023
weighted avg 0.97 0.97 0.97 2023
#Create confusion matrix
import seaborn as sns
label = [0, 1] #0 = cat and 1 = dog
cm = confusion_matrix(data_df['label'], data_df['prediction'], labels = label)
#Plot
ax= plt.subplot()
sns.heatmap(cm, annot=True, fmt='g', ax=ax);
# labels, title and ticks
ax.set_xlabel('Predicted labels');ax.set_ylabel('True labels');
ax.set_title('Confusion Matrix');
ax.xaxis.set_ticklabels(["Cat", "Dog"]); ax.yaxis.set_ticklabels(["Cat", "Dog"])
[Text(0, 0.5, 'Cat'), Text(0, 1.5, 'Dog')]
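Since `ConfusionMatrixDisplay` is already imported from scikit-learn at the top of the notebook, the same matrix could also be plotted without seaborn; a minimal sketch:
# Alternative confusion matrix plot with scikit-learn's ConfusionMatrixDisplay (sketch)
from sklearn.metrics import ConfusionMatrixDisplay
ConfusionMatrixDisplay(confusion_matrix=cm, display_labels=["Cat", "Dog"]).plot()
plt.title('Confusion Matrix')
plt.show()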
#Create experiment log
ExperimentLog = pd.DataFrame(
columns=[
"Base Model",
"Input Resolution",
"Optimizer",
"Epochs",
"Training Accuracy",
"Validation Accuracy",
"Test Accuracy",
"Fit Time",
"Total Parameters"
]
)
ExperimentLog.loc[len(ExperimentLog)] = [
"EfficientNet B0",
224,
"RMSprop",
10,
max(acc),
max(val_acc),
test_accuracy,
fit_time,
model_params
]
ExperimentLog
| | Base Model | Input Resolution | Optimizer | Epochs | Training Accuracy | Validation Accuracy | Test Accuracy | Fit Time | Total Parameters |
|---|---|---|---|---|---|---|---|---|---|
| 0 | EfficientNet B0 | 224 | RMSprop | 10 | 0.9515 | 0.9875 | 0.97133 | 2195.619411 | 68276893 |
# Specify the optimizer, loss function and evaluation metrics.
model_final.compile(loss='binary_crossentropy', optimizer=tf.keras.optimizers.RMSprop(learning_rate=0.0001, weight_decay=1e-6), metrics=['accuracy'])
t1 = time.time()
#train the model
eff_history = model_final.fit_generator(train_generator, validation_data = validation_generator, steps_per_epoch = 100, epochs = 10)
fit_time = time.time() - t1
<ipython-input-28-b7f31b017b18>:3: UserWarning: `Model.fit_generator` is deprecated and will be removed in a future version. Please use `Model.fit`, which supports generators. eff_history = model_final.fit_generator(train_generator, validation_data = validation_generator, steps_per_epoch = 100, epochs = 10)
Epoch 1/10 100/100 [==============================] - 50s 442ms/step - loss: 0.3316 - accuracy: 0.9420 - val_loss: 0.1937 - val_accuracy: 0.9862
Epoch 2/10 100/100 [==============================] - 48s 485ms/step - loss: 0.2205 - accuracy: 0.9530 - val_loss: 0.1539 - val_accuracy: 0.9875
Epoch 3/10 100/100 [==============================] - 40s 399ms/step - loss: 0.2557 - accuracy: 0.9490 - val_loss: 0.2016 - val_accuracy: 0.9850
Epoch 4/10 100/100 [==============================] - 37s 373ms/step - loss: 0.2714 - accuracy: 0.9455 - val_loss: 0.1942 - val_accuracy: 0.9850
Epoch 5/10 100/100 [==============================] - 34s 343ms/step - loss: 0.2824 - accuracy: 0.9455 - val_loss: 0.2047 - val_accuracy: 0.9837
Epoch 6/10 100/100 [==============================] - 34s 338ms/step - loss: 0.2901 - accuracy: 0.9530 - val_loss: 0.1585 - val_accuracy: 0.9825
Epoch 7/10 100/100 [==============================] - 32s 321ms/step - loss: 0.3372 - accuracy: 0.9485 - val_loss: 0.1716 - val_accuracy: 0.9850
Epoch 8/10 100/100 [==============================] - 32s 323ms/step - loss: 0.3549 - accuracy: 0.9490 - val_loss: 0.1882 - val_accuracy: 0.9862
Epoch 9/10 100/100 [==============================] - 32s 324ms/step - loss: 0.2235 - accuracy: 0.9520 - val_loss: 0.1747 - val_accuracy: 0.9862
Epoch 10/10 100/100 [==============================] - 31s 307ms/step - loss: 0.2472 - accuracy: 0.9562 - val_loss: 0.2063 - val_accuracy: 0.9837
# time it took to fit the model
print(fit_time)
381.655277967453
#Plot training and validation accuracy and loss for each epoch
acc = eff_history.history['accuracy']
val_acc = eff_history.history['val_accuracy']
loss = eff_history.history['loss']
val_loss = eff_history.history['val_loss']
epochs = range(1,len(acc) + 1)
plt.plot(epochs,acc,label = 'Training Accuracy')
plt.plot(epochs,val_acc,label = 'Validation Accuracy')
plt.title('Training and Validation Accuracy')
plt.legend()
plt.figure()
plt.plot(epochs,loss,label = 'Training loss')
plt.plot(epochs,val_loss,label = 'Validation Loss')
plt.title('Training and Validation Loss')
plt.legend()
plt.show()
# Test dataset
test_datagen = ImageDataGenerator(rescale=1./255)
test_generator = test_datagen.flow_from_directory(
test_dir,
target_size=(224, 224),
shuffle = False,
class_mode='binary',
batch_size=1)
Found 2023 images belonging to 2 classes.
#Get test length
filenames = test_generator.filenames
nb_samples = len(filenames)
#Predict on test set
predict = model_final.predict_generator(test_generator,steps = nb_samples)
<ipython-input-33-7710eff794cf>:2: UserWarning: `Model.predict_generator` is deprecated and will be removed in a future version. Please use `Model.predict`, which supports generators. predict = model_final.predict_generator(test_generator,steps = nb_samples)
#Get list of prediction results
pred_list = []
for i in predict:
    if i > 0.5:
        result = 1 #dog
        pred_list.append(result)
    else:
        result = 0 #cat
        pred_list.append(result)
#Create dataframe of image ID, image true label, image predicted label
import pandas as pd
image_ids = [name.split('/')[-1] for name in test_generator.filenames]
image_label = [name.split('/')[0] for name in test_generator.filenames]
data = {'id': image_ids, 'label':image_label, 'prediction':pred_list}
data_df = pd.DataFrame(data)
data_df.label.replace(('cats', 'dogs'), (0, 1), inplace=True) # change cat and dog label to 0 or 1
#Get test accuracy score
from sklearn.metrics import accuracy_score, confusion_matrix
test_accuracy = accuracy_score(data_df['label'], data_df['prediction'])
print('Test Accuracy: ', round((test_accuracy * 100), 2), "%")
Test Accuracy: 97.48 %
from sklearn.metrics import classification_report
#Classification Report
print(classification_report(data_df['label'], data_df['prediction']))
precision recall f1-score support
0 0.97 0.98 0.97 1011
1 0.98 0.97 0.97 1012
accuracy 0.97 2023
macro avg 0.97 0.97 0.97 2023
weighted avg 0.97 0.97 0.97 2023
#Create confusion matrix
import seaborn as sns
label = [0, 1] #0 = cat and 1 = dog
cm = confusion_matrix(data_df['label'], data_df['prediction'], labels = label)
#Plot
ax= plt.subplot()
sns.heatmap(cm, annot=True, fmt='g', ax=ax);
# labels, title and ticks
ax.set_xlabel('Predicted labels');ax.set_ylabel('True labels');
ax.set_title('Confusion Matrix');
ax.xaxis.set_ticklabels(["Cat", "Dog"]); ax.yaxis.set_ticklabels(["Cat", "Dog"])
[Text(0, 0.5, 'Cat'), Text(0, 1.5, 'Dog')]
ExperimentLog.loc[len(ExperimentLog)] = [
"EfficientNet B0 with decay",
224,
"RMSprop",
10,
max(acc),
max(val_acc),
test_accuracy,
fit_time,
model_params
]
ExperimentLog
| | Base Model | Input Resolution | Optimizer | Epochs | Training Accuracy | Validation Accuracy | Test Accuracy | Fit Time | Total Parameters |
|---|---|---|---|---|---|---|---|---|---|
| 0 | EfficientNet B0 | 224 | RMSprop | 10 | 0.951500 | 0.9875 | 0.97133 | 2195.619411 | 68276893 |
| 1 | EfficientNet B0 with decay | 224 | RMSprop | 10 | 0.956171 | 0.9875 | 0.97479 | 381.655278 | 68276893 |
# Add rescaling and augmentation to ImageDataGenerator for the training set
train_datagen = ImageDataGenerator(rescale = 1./255., rotation_range = 40, width_shift_range = 0.2, height_shift_range = 0.2, shear_range = 0.2, zoom_range = 0.2, horizontal_flip = True, validation_split=0.1) # set validation split
# Rescale validation set. No augmentation on the validation set.
validation_datagen = ImageDataGenerator(rescale = 1./255.,validation_split=0.1) # set validation split
#Read images directly from directory.
train_generator = train_datagen.flow_from_directory(train_dir, seed = 42, shuffle = True, batch_size = 20, class_mode = 'binary', target_size = (240, 240), subset='training') #set as training data
validation_generator = validation_datagen.flow_from_directory(train_dir, seed = 42, shuffle = True, batch_size = 20, class_mode = 'binary', target_size = (240, 240), subset='validation') # same directory as training data. Set as validation data
Found 7205 images belonging to 2 classes.
Found 800 images belonging to 2 classes.
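As a quick sanity check, one batch can be pulled from the augmented training generator to confirm the batch size and the new 240x240 target resolution; a minimal sketch:
# Preview one augmented batch from the training generator (sketch)
images, labels = next(train_generator)
print(images.shape, labels.shape)  # expected: (20, 240, 240, 3) (20,)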
#Instantiates the EfficientNet architecture
base_model = efn.EfficientNetB1(input_shape = (240, 240, 3), include_top = False, weights = 'imagenet')
Downloading data from https://github.com/Callidior/keras-applications/releases/download/efficientnet/efficientnet-b1_weights_tf_dim_ordering_tf_kernels_autoaugment_notop.h5 27164032/27164032 [==============================] - 6s 0us/step
# Set trainable attribute to false for all of the base model layers
for layer in base_model.layers:
layer.trainable = False
#Build on top of existing base model.
x = base_model.output
x = layers.Flatten()(x) #convert to 1D array
x = layers.Dense(1024, activation="relu")(x) #fully connected layer with 1,024 hidden units and ReLU activation
x = layers.Dropout(0.5)(x) #Drops 50% of inputs to zero at each training iteration to help reduce overfitting
# Add a final sigmoid layer with 1 node for classification output (probability between 0 and 1)
predictions = layers.Dense(1, activation="sigmoid")(x)
model_final = Model(inputs = base_model.input, outputs = predictions)
#Print model summary using Keras's built-in summary() method (the torchsummary import is not needed for a Keras model)
model_sum = model_final.summary()
Model: "model_1"
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_2 (InputLayer) [(None, 240, 240, 3 0 []
)]
stem_conv (Conv2D) (None, 120, 120, 32 864 ['input_2[0][0]']
)
stem_bn (BatchNormalization) (None, 120, 120, 32 128 ['stem_conv[0][0]']
)
stem_activation (Activation) (None, 120, 120, 32 0 ['stem_bn[0][0]']
)
block1a_dwconv (DepthwiseConv2 (None, 120, 120, 32 288 ['stem_activation[0][0]']
D) )
block1a_bn (BatchNormalization (None, 120, 120, 32 128 ['block1a_dwconv[0][0]']
) )
block1a_activation (Activation (None, 120, 120, 32 0 ['block1a_bn[0][0]']
) )
block1a_se_squeeze (GlobalAver (None, 32) 0 ['block1a_activation[0][0]']
agePooling2D)
block1a_se_reshape (Reshape) (None, 1, 1, 32) 0 ['block1a_se_squeeze[0][0]']
block1a_se_reduce (Conv2D) (None, 1, 1, 8) 264 ['block1a_se_reshape[0][0]']
block1a_se_expand (Conv2D) (None, 1, 1, 32) 288 ['block1a_se_reduce[0][0]']
block1a_se_excite (Multiply) (None, 120, 120, 32 0 ['block1a_activation[0][0]',
) 'block1a_se_expand[0][0]']
block1a_project_conv (Conv2D) (None, 120, 120, 16 512 ['block1a_se_excite[0][0]']
)
block1a_project_bn (BatchNorma (None, 120, 120, 16 64 ['block1a_project_conv[0][0]']
lization) )
block1b_dwconv (DepthwiseConv2 (None, 120, 120, 16 144 ['block1a_project_bn[0][0]']
D) )
block1b_bn (BatchNormalization (None, 120, 120, 16 64 ['block1b_dwconv[0][0]']
) )
block1b_activation (Activation (None, 120, 120, 16 0 ['block1b_bn[0][0]']
) )
block1b_se_squeeze (GlobalAver (None, 16) 0 ['block1b_activation[0][0]']
agePooling2D)
block1b_se_reshape (Reshape) (None, 1, 1, 16) 0 ['block1b_se_squeeze[0][0]']
block1b_se_reduce (Conv2D) (None, 1, 1, 4) 68 ['block1b_se_reshape[0][0]']
block1b_se_expand (Conv2D) (None, 1, 1, 16) 80 ['block1b_se_reduce[0][0]']
block1b_se_excite (Multiply) (None, 120, 120, 16 0 ['block1b_activation[0][0]',
) 'block1b_se_expand[0][0]']
block1b_project_conv (Conv2D) (None, 120, 120, 16 256 ['block1b_se_excite[0][0]']
)
block1b_project_bn (BatchNorma (None, 120, 120, 16 64 ['block1b_project_conv[0][0]']
lization) )
block1b_drop (FixedDropout) (None, 120, 120, 16 0 ['block1b_project_bn[0][0]']
)
block1b_add (Add) (None, 120, 120, 16 0 ['block1b_drop[0][0]',
) 'block1a_project_bn[0][0]']
block2a_expand_conv (Conv2D) (None, 120, 120, 96 1536 ['block1b_add[0][0]']
)
block2a_expand_bn (BatchNormal (None, 120, 120, 96 384 ['block2a_expand_conv[0][0]']
ization) )
block2a_expand_activation (Act (None, 120, 120, 96 0 ['block2a_expand_bn[0][0]']
ivation) )
block2a_dwconv (DepthwiseConv2 (None, 60, 60, 96) 864 ['block2a_expand_activation[0][0]
D) ']
block2a_bn (BatchNormalization (None, 60, 60, 96) 384 ['block2a_dwconv[0][0]']
)
block2a_activation (Activation (None, 60, 60, 96) 0 ['block2a_bn[0][0]']
)
block2a_se_squeeze (GlobalAver (None, 96) 0 ['block2a_activation[0][0]']
agePooling2D)
block2a_se_reshape (Reshape) (None, 1, 1, 96) 0 ['block2a_se_squeeze[0][0]']
block2a_se_reduce (Conv2D) (None, 1, 1, 4) 388 ['block2a_se_reshape[0][0]']
block2a_se_expand (Conv2D) (None, 1, 1, 96) 480 ['block2a_se_reduce[0][0]']
block2a_se_excite (Multiply) (None, 60, 60, 96) 0 ['block2a_activation[0][0]',
'block2a_se_expand[0][0]']
block2a_project_conv (Conv2D) (None, 60, 60, 24) 2304 ['block2a_se_excite[0][0]']
block2a_project_bn (BatchNorma (None, 60, 60, 24) 96 ['block2a_project_conv[0][0]']
lization)
block2b_expand_conv (Conv2D) (None, 60, 60, 144) 3456 ['block2a_project_bn[0][0]']
block2b_expand_bn (BatchNormal (None, 60, 60, 144) 576 ['block2b_expand_conv[0][0]']
ization)
block2b_expand_activation (Act (None, 60, 60, 144) 0 ['block2b_expand_bn[0][0]']
ivation)
block2b_dwconv (DepthwiseConv2 (None, 60, 60, 144) 1296 ['block2b_expand_activation[0][0]
D) ']
block2b_bn (BatchNormalization (None, 60, 60, 144) 576 ['block2b_dwconv[0][0]']
)
block2b_activation (Activation (None, 60, 60, 144) 0 ['block2b_bn[0][0]']
)
block2b_se_squeeze (GlobalAver (None, 144) 0 ['block2b_activation[0][0]']
agePooling2D)
block2b_se_reshape (Reshape) (None, 1, 1, 144) 0 ['block2b_se_squeeze[0][0]']
block2b_se_reduce (Conv2D) (None, 1, 1, 6) 870 ['block2b_se_reshape[0][0]']
block2b_se_expand (Conv2D) (None, 1, 1, 144) 1008 ['block2b_se_reduce[0][0]']
block2b_se_excite (Multiply) (None, 60, 60, 144) 0 ['block2b_activation[0][0]',
'block2b_se_expand[0][0]']
block2b_project_conv (Conv2D) (None, 60, 60, 24) 3456 ['block2b_se_excite[0][0]']
block2b_project_bn (BatchNorma (None, 60, 60, 24) 96 ['block2b_project_conv[0][0]']
lization)
block2b_drop (FixedDropout) (None, 60, 60, 24) 0 ['block2b_project_bn[0][0]']
block2b_add (Add) (None, 60, 60, 24) 0 ['block2b_drop[0][0]',
'block2a_project_bn[0][0]']
block2c_expand_conv (Conv2D) (None, 60, 60, 144) 3456 ['block2b_add[0][0]']
block2c_expand_bn (BatchNormal (None, 60, 60, 144) 576 ['block2c_expand_conv[0][0]']
ization)
block2c_expand_activation (Act (None, 60, 60, 144) 0 ['block2c_expand_bn[0][0]']
ivation)
block2c_dwconv (DepthwiseConv2 (None, 60, 60, 144) 1296 ['block2c_expand_activation[0][0]
D) ']
block2c_bn (BatchNormalization (None, 60, 60, 144) 576 ['block2c_dwconv[0][0]']
)
block2c_activation (Activation (None, 60, 60, 144) 0 ['block2c_bn[0][0]']
)
block2c_se_squeeze (GlobalAver (None, 144) 0 ['block2c_activation[0][0]']
agePooling2D)
block2c_se_reshape (Reshape) (None, 1, 1, 144) 0 ['block2c_se_squeeze[0][0]']
block2c_se_reduce (Conv2D) (None, 1, 1, 6) 870 ['block2c_se_reshape[0][0]']
block2c_se_expand (Conv2D) (None, 1, 1, 144) 1008 ['block2c_se_reduce[0][0]']
block2c_se_excite (Multiply) (None, 60, 60, 144) 0 ['block2c_activation[0][0]',
'block2c_se_expand[0][0]']
block2c_project_conv (Conv2D) (None, 60, 60, 24) 3456 ['block2c_se_excite[0][0]']
block2c_project_bn (BatchNorma (None, 60, 60, 24) 96 ['block2c_project_conv[0][0]']
lization)
block2c_drop (FixedDropout) (None, 60, 60, 24) 0 ['block2c_project_bn[0][0]']
block2c_add (Add) (None, 60, 60, 24) 0 ['block2c_drop[0][0]',
'block2b_add[0][0]']
block3a_expand_conv (Conv2D) (None, 60, 60, 144) 3456 ['block2c_add[0][0]']
block3a_expand_bn (BatchNormal (None, 60, 60, 144) 576 ['block3a_expand_conv[0][0]']
ization)
block3a_expand_activation (Act (None, 60, 60, 144) 0 ['block3a_expand_bn[0][0]']
ivation)
block3a_dwconv (DepthwiseConv2 (None, 30, 30, 144) 3600 ['block3a_expand_activation[0][0]
D) ']
block3a_bn (BatchNormalization (None, 30, 30, 144) 576 ['block3a_dwconv[0][0]']
)
block3a_activation (Activation (None, 30, 30, 144) 0 ['block3a_bn[0][0]']
)
block3a_se_squeeze (GlobalAver (None, 144) 0 ['block3a_activation[0][0]']
agePooling2D)
block3a_se_reshape (Reshape) (None, 1, 1, 144) 0 ['block3a_se_squeeze[0][0]']
block3a_se_reduce (Conv2D) (None, 1, 1, 6) 870 ['block3a_se_reshape[0][0]']
block3a_se_expand (Conv2D) (None, 1, 1, 144) 1008 ['block3a_se_reduce[0][0]']
block3a_se_excite (Multiply) (None, 30, 30, 144) 0 ['block3a_activation[0][0]',
'block3a_se_expand[0][0]']
block3a_project_conv (Conv2D) (None, 30, 30, 40) 5760 ['block3a_se_excite[0][0]']
block3a_project_bn (BatchNorma (None, 30, 30, 40) 160 ['block3a_project_conv[0][0]']
lization)
block3b_expand_conv (Conv2D) (None, 30, 30, 240) 9600 ['block3a_project_bn[0][0]']
block3b_expand_bn (BatchNormal (None, 30, 30, 240) 960 ['block3b_expand_conv[0][0]']
ization)
block3b_expand_activation (Act (None, 30, 30, 240) 0 ['block3b_expand_bn[0][0]']
ivation)
block3b_dwconv (DepthwiseConv2 (None, 30, 30, 240) 6000 ['block3b_expand_activation[0][0]
D) ']
block3b_bn (BatchNormalization (None, 30, 30, 240) 960 ['block3b_dwconv[0][0]']
)
block3b_activation (Activation (None, 30, 30, 240) 0 ['block3b_bn[0][0]']
)
block3b_se_squeeze (GlobalAver (None, 240) 0 ['block3b_activation[0][0]']
agePooling2D)
block3b_se_reshape (Reshape) (None, 1, 1, 240) 0 ['block3b_se_squeeze[0][0]']
block3b_se_reduce (Conv2D) (None, 1, 1, 10) 2410 ['block3b_se_reshape[0][0]']
block3b_se_expand (Conv2D) (None, 1, 1, 240) 2640 ['block3b_se_reduce[0][0]']
block3b_se_excite (Multiply) (None, 30, 30, 240) 0 ['block3b_activation[0][0]',
'block3b_se_expand[0][0]']
block3b_project_conv (Conv2D) (None, 30, 30, 40) 9600 ['block3b_se_excite[0][0]']
block3b_project_bn (BatchNorma (None, 30, 30, 40) 160 ['block3b_project_conv[0][0]']
lization)
block3b_drop (FixedDropout) (None, 30, 30, 40) 0 ['block3b_project_bn[0][0]']
block3b_add (Add) (None, 30, 30, 40) 0 ['block3b_drop[0][0]',
'block3a_project_bn[0][0]']
block3c_expand_conv (Conv2D) (None, 30, 30, 240) 9600 ['block3b_add[0][0]']
block3c_expand_bn (BatchNormal (None, 30, 30, 240) 960 ['block3c_expand_conv[0][0]']
ization)
block3c_expand_activation (Act (None, 30, 30, 240) 0 ['block3c_expand_bn[0][0]']
ivation)
block3c_dwconv (DepthwiseConv2 (None, 30, 30, 240) 6000 ['block3c_expand_activation[0][0]
D) ']
block3c_bn (BatchNormalization (None, 30, 30, 240) 960 ['block3c_dwconv[0][0]']
)
block3c_activation (Activation (None, 30, 30, 240) 0 ['block3c_bn[0][0]']
)
block3c_se_squeeze (GlobalAver (None, 240) 0 ['block3c_activation[0][0]']
agePooling2D)
block3c_se_reshape (Reshape) (None, 1, 1, 240) 0 ['block3c_se_squeeze[0][0]']
block3c_se_reduce (Conv2D) (None, 1, 1, 10) 2410 ['block3c_se_reshape[0][0]']
block3c_se_expand (Conv2D) (None, 1, 1, 240) 2640 ['block3c_se_reduce[0][0]']
block3c_se_excite (Multiply) (None, 30, 30, 240) 0 ['block3c_activation[0][0]',
'block3c_se_expand[0][0]']
block3c_project_conv (Conv2D) (None, 30, 30, 40) 9600 ['block3c_se_excite[0][0]']
block3c_project_bn (BatchNorma (None, 30, 30, 40) 160 ['block3c_project_conv[0][0]']
lization)
block3c_drop (FixedDropout) (None, 30, 30, 40) 0 ['block3c_project_bn[0][0]']
block3c_add (Add) (None, 30, 30, 40) 0 ['block3c_drop[0][0]',
'block3b_add[0][0]']
block4a_expand_conv (Conv2D) (None, 30, 30, 240) 9600 ['block3c_add[0][0]']
block4a_expand_bn (BatchNormal (None, 30, 30, 240) 960 ['block4a_expand_conv[0][0]']
ization)
block4a_expand_activation (Act (None, 30, 30, 240) 0 ['block4a_expand_bn[0][0]']
ivation)
block4a_dwconv (DepthwiseConv2 (None, 15, 15, 240) 2160 ['block4a_expand_activation[0][0]
D) ']
block4a_bn (BatchNormalization (None, 15, 15, 240) 960 ['block4a_dwconv[0][0]']
)
block4a_activation (Activation (None, 15, 15, 240) 0 ['block4a_bn[0][0]']
)
block4a_se_squeeze (GlobalAver (None, 240) 0 ['block4a_activation[0][0]']
agePooling2D)
block4a_se_reshape (Reshape) (None, 1, 1, 240) 0 ['block4a_se_squeeze[0][0]']
block4a_se_reduce (Conv2D) (None, 1, 1, 10) 2410 ['block4a_se_reshape[0][0]']
block4a_se_expand (Conv2D) (None, 1, 1, 240) 2640 ['block4a_se_reduce[0][0]']
block4a_se_excite (Multiply) (None, 15, 15, 240) 0 ['block4a_activation[0][0]',
'block4a_se_expand[0][0]']
block4a_project_conv (Conv2D) (None, 15, 15, 80) 19200 ['block4a_se_excite[0][0]']
block4a_project_bn (BatchNorma (None, 15, 15, 80) 320 ['block4a_project_conv[0][0]']
lization)
block4b_expand_conv (Conv2D) (None, 15, 15, 480) 38400 ['block4a_project_bn[0][0]']
block4b_expand_bn (BatchNormal (None, 15, 15, 480) 1920 ['block4b_expand_conv[0][0]']
ization)
block4b_expand_activation (Act (None, 15, 15, 480) 0 ['block4b_expand_bn[0][0]']
ivation)
block4b_dwconv (DepthwiseConv2 (None, 15, 15, 480) 4320 ['block4b_expand_activation[0][0]
D) ']
block4b_bn (BatchNormalization (None, 15, 15, 480) 1920 ['block4b_dwconv[0][0]']
)
block4b_activation (Activation (None, 15, 15, 480) 0 ['block4b_bn[0][0]']
)
block4b_se_squeeze (GlobalAver (None, 480) 0 ['block4b_activation[0][0]']
agePooling2D)
block4b_se_reshape (Reshape) (None, 1, 1, 480) 0 ['block4b_se_squeeze[0][0]']
block4b_se_reduce (Conv2D) (None, 1, 1, 20) 9620 ['block4b_se_reshape[0][0]']
block4b_se_expand (Conv2D) (None, 1, 1, 480) 10080 ['block4b_se_reduce[0][0]']
block4b_se_excite (Multiply) (None, 15, 15, 480) 0 ['block4b_activation[0][0]',
'block4b_se_expand[0][0]']
block4b_project_conv (Conv2D) (None, 15, 15, 80) 38400 ['block4b_se_excite[0][0]']
block4b_project_bn (BatchNorma (None, 15, 15, 80) 320 ['block4b_project_conv[0][0]']
lization)
block4b_drop (FixedDropout) (None, 15, 15, 80) 0 ['block4b_project_bn[0][0]']
block4b_add (Add) (None, 15, 15, 80) 0 ['block4b_drop[0][0]',
'block4a_project_bn[0][0]']
block4c_expand_conv (Conv2D) (None, 15, 15, 480) 38400 ['block4b_add[0][0]']
block4c_expand_bn (BatchNormal (None, 15, 15, 480) 1920 ['block4c_expand_conv[0][0]']
ization)
block4c_expand_activation (Act (None, 15, 15, 480) 0 ['block4c_expand_bn[0][0]']
ivation)
block4c_dwconv (DepthwiseConv2 (None, 15, 15, 480) 4320 ['block4c_expand_activation[0][0]
D) ']
block4c_bn (BatchNormalization (None, 15, 15, 480) 1920 ['block4c_dwconv[0][0]']
)
block4c_activation (Activation (None, 15, 15, 480) 0 ['block4c_bn[0][0]']
)
block4c_se_squeeze (GlobalAver (None, 480) 0 ['block4c_activation[0][0]']
agePooling2D)
block4c_se_reshape (Reshape) (None, 1, 1, 480) 0 ['block4c_se_squeeze[0][0]']
block4c_se_reduce (Conv2D) (None, 1, 1, 20) 9620 ['block4c_se_reshape[0][0]']
block4c_se_expand (Conv2D) (None, 1, 1, 480) 10080 ['block4c_se_reduce[0][0]']
block4c_se_excite (Multiply) (None, 15, 15, 480) 0 ['block4c_activation[0][0]',
'block4c_se_expand[0][0]']
block4c_project_conv (Conv2D) (None, 15, 15, 80) 38400 ['block4c_se_excite[0][0]']
block4c_project_bn (BatchNorma (None, 15, 15, 80) 320 ['block4c_project_conv[0][0]']
lization)
block4c_drop (FixedDropout) (None, 15, 15, 80) 0 ['block4c_project_bn[0][0]']
block4c_add (Add) (None, 15, 15, 80) 0 ['block4c_drop[0][0]',
'block4b_add[0][0]']
block4d_expand_conv (Conv2D) (None, 15, 15, 480) 38400 ['block4c_add[0][0]']
block4d_expand_bn (BatchNormal (None, 15, 15, 480) 1920 ['block4d_expand_conv[0][0]']
ization)
block4d_expand_activation (Act (None, 15, 15, 480) 0 ['block4d_expand_bn[0][0]']
ivation)
block4d_dwconv (DepthwiseConv2 (None, 15, 15, 480) 4320 ['block4d_expand_activation[0][0]
D) ']
block4d_bn (BatchNormalization (None, 15, 15, 480) 1920 ['block4d_dwconv[0][0]']
)
block4d_activation (Activation (None, 15, 15, 480) 0 ['block4d_bn[0][0]']
)
block4d_se_squeeze (GlobalAver (None, 480) 0 ['block4d_activation[0][0]']
agePooling2D)
block4d_se_reshape (Reshape) (None, 1, 1, 480) 0 ['block4d_se_squeeze[0][0]']
block4d_se_reduce (Conv2D) (None, 1, 1, 20) 9620 ['block4d_se_reshape[0][0]']
block4d_se_expand (Conv2D) (None, 1, 1, 480) 10080 ['block4d_se_reduce[0][0]']
block4d_se_excite (Multiply) (None, 15, 15, 480) 0 ['block4d_activation[0][0]',
'block4d_se_expand[0][0]']
block4d_project_conv (Conv2D) (None, 15, 15, 80) 38400 ['block4d_se_excite[0][0]']
block4d_project_bn (BatchNorma (None, 15, 15, 80) 320 ['block4d_project_conv[0][0]']
lization)
block4d_drop (FixedDropout) (None, 15, 15, 80) 0 ['block4d_project_bn[0][0]']
block4d_add (Add) (None, 15, 15, 80) 0 ['block4d_drop[0][0]',
'block4c_add[0][0]']
block5a_expand_conv (Conv2D) (None, 15, 15, 480) 38400 ['block4d_add[0][0]']
block5a_expand_bn (BatchNormal (None, 15, 15, 480) 1920 ['block5a_expand_conv[0][0]']
ization)
block5a_expand_activation (Act (None, 15, 15, 480) 0 ['block5a_expand_bn[0][0]']
ivation)
block5a_dwconv (DepthwiseConv2 (None, 15, 15, 480) 12000 ['block5a_expand_activation[0][0]
D) ']
block5a_bn (BatchNormalization (None, 15, 15, 480) 1920 ['block5a_dwconv[0][0]']
)
block5a_activation (Activation (None, 15, 15, 480) 0 ['block5a_bn[0][0]']
)
block5a_se_squeeze (GlobalAver (None, 480) 0 ['block5a_activation[0][0]']
agePooling2D)
block5a_se_reshape (Reshape) (None, 1, 1, 480) 0 ['block5a_se_squeeze[0][0]']
block5a_se_reduce (Conv2D) (None, 1, 1, 20) 9620 ['block5a_se_reshape[0][0]']
block5a_se_expand (Conv2D) (None, 1, 1, 480) 10080 ['block5a_se_reduce[0][0]']
block5a_se_excite (Multiply) (None, 15, 15, 480) 0 ['block5a_activation[0][0]',
'block5a_se_expand[0][0]']
block5a_project_conv (Conv2D) (None, 15, 15, 112) 53760 ['block5a_se_excite[0][0]']
block5a_project_bn (BatchNorma (None, 15, 15, 112) 448 ['block5a_project_conv[0][0]']
lization)
block5b_expand_conv (Conv2D) (None, 15, 15, 672) 75264 ['block5a_project_bn[0][0]']
block5b_expand_bn (BatchNormal (None, 15, 15, 672) 2688 ['block5b_expand_conv[0][0]']
ization)
block5b_expand_activation (Act (None, 15, 15, 672) 0 ['block5b_expand_bn[0][0]']
ivation)
block5b_dwconv (DepthwiseConv2 (None, 15, 15, 672) 16800 ['block5b_expand_activation[0][0]
D) ']
block5b_bn (BatchNormalization (None, 15, 15, 672) 2688 ['block5b_dwconv[0][0]']
)
block5b_activation (Activation (None, 15, 15, 672) 0 ['block5b_bn[0][0]']
)
block5b_se_squeeze (GlobalAver (None, 672) 0 ['block5b_activation[0][0]']
agePooling2D)
block5b_se_reshape (Reshape) (None, 1, 1, 672) 0 ['block5b_se_squeeze[0][0]']
block5b_se_reduce (Conv2D) (None, 1, 1, 28) 18844 ['block5b_se_reshape[0][0]']
block5b_se_expand (Conv2D) (None, 1, 1, 672) 19488 ['block5b_se_reduce[0][0]']
block5b_se_excite (Multiply) (None, 15, 15, 672) 0 ['block5b_activation[0][0]',
'block5b_se_expand[0][0]']
block5b_project_conv (Conv2D) (None, 15, 15, 112) 75264 ['block5b_se_excite[0][0]']
block5b_project_bn (BatchNorma (None, 15, 15, 112) 448 ['block5b_project_conv[0][0]']
lization)
block5b_drop (FixedDropout) (None, 15, 15, 112) 0 ['block5b_project_bn[0][0]']
block5b_add (Add) (None, 15, 15, 112) 0 ['block5b_drop[0][0]',
'block5a_project_bn[0][0]']
block5c_expand_conv (Conv2D) (None, 15, 15, 672) 75264 ['block5b_add[0][0]']
block5c_expand_bn (BatchNormal (None, 15, 15, 672) 2688 ['block5c_expand_conv[0][0]']
ization)
block5c_expand_activation (Act (None, 15, 15, 672) 0 ['block5c_expand_bn[0][0]']
ivation)
block5c_dwconv (DepthwiseConv2 (None, 15, 15, 672) 16800 ['block5c_expand_activation[0][0]
D) ']
block5c_bn (BatchNormalization (None, 15, 15, 672) 2688 ['block5c_dwconv[0][0]']
)
block5c_activation (Activation (None, 15, 15, 672) 0 ['block5c_bn[0][0]']
)
block5c_se_squeeze (GlobalAver (None, 672) 0 ['block5c_activation[0][0]']
agePooling2D)
block5c_se_reshape (Reshape) (None, 1, 1, 672) 0 ['block5c_se_squeeze[0][0]']
block5c_se_reduce (Conv2D) (None, 1, 1, 28) 18844 ['block5c_se_reshape[0][0]']
block5c_se_expand (Conv2D) (None, 1, 1, 672) 19488 ['block5c_se_reduce[0][0]']
block5c_se_excite (Multiply) (None, 15, 15, 672) 0 ['block5c_activation[0][0]',
'block5c_se_expand[0][0]']
block5c_project_conv (Conv2D) (None, 15, 15, 112) 75264 ['block5c_se_excite[0][0]']
block5c_project_bn (BatchNorma (None, 15, 15, 112) 448 ['block5c_project_conv[0][0]']
lization)
block5c_drop (FixedDropout) (None, 15, 15, 112) 0 ['block5c_project_bn[0][0]']
block5c_add (Add) (None, 15, 15, 112) 0 ['block5c_drop[0][0]',
'block5b_add[0][0]']
block5d_expand_conv (Conv2D) (None, 15, 15, 672) 75264 ['block5c_add[0][0]']
block5d_expand_bn (BatchNormal (None, 15, 15, 672) 2688 ['block5d_expand_conv[0][0]']
ization)
block5d_expand_activation (Act (None, 15, 15, 672) 0 ['block5d_expand_bn[0][0]']
ivation)
block5d_dwconv (DepthwiseConv2 (None, 15, 15, 672) 16800 ['block5d_expand_activation[0][0]
D) ']
block5d_bn (BatchNormalization (None, 15, 15, 672) 2688 ['block5d_dwconv[0][0]']
)
block5d_activation (Activation (None, 15, 15, 672) 0 ['block5d_bn[0][0]']
)
block5d_se_squeeze (GlobalAver (None, 672) 0 ['block5d_activation[0][0]']
agePooling2D)
block5d_se_reshape (Reshape) (None, 1, 1, 672) 0 ['block5d_se_squeeze[0][0]']
block5d_se_reduce (Conv2D) (None, 1, 1, 28) 18844 ['block5d_se_reshape[0][0]']
block5d_se_expand (Conv2D) (None, 1, 1, 672) 19488 ['block5d_se_reduce[0][0]']
block5d_se_excite (Multiply) (None, 15, 15, 672) 0 ['block5d_activation[0][0]',
'block5d_se_expand[0][0]']
block5d_project_conv (Conv2D) (None, 15, 15, 112) 75264 ['block5d_se_excite[0][0]']
block5d_project_bn (BatchNorma (None, 15, 15, 112) 448 ['block5d_project_conv[0][0]']
lization)
block5d_drop (FixedDropout) (None, 15, 15, 112) 0 ['block5d_project_bn[0][0]']
block5d_add (Add) (None, 15, 15, 112) 0 ['block5d_drop[0][0]',
'block5c_add[0][0]']
block6a_expand_conv (Conv2D) (None, 15, 15, 672) 75264 ['block5d_add[0][0]']
block6a_expand_bn (BatchNormal (None, 15, 15, 672) 2688 ['block6a_expand_conv[0][0]']
ization)
block6a_expand_activation (Act (None, 15, 15, 672) 0 ['block6a_expand_bn[0][0]']
ivation)
block6a_dwconv (DepthwiseConv2 (None, 8, 8, 672) 16800 ['block6a_expand_activation[0][0]
D) ']
block6a_bn (BatchNormalization (None, 8, 8, 672) 2688 ['block6a_dwconv[0][0]']
)
block6a_activation (Activation (None, 8, 8, 672) 0 ['block6a_bn[0][0]']
)
block6a_se_squeeze (GlobalAver (None, 672) 0 ['block6a_activation[0][0]']
agePooling2D)
block6a_se_reshape (Reshape) (None, 1, 1, 672) 0 ['block6a_se_squeeze[0][0]']
block6a_se_reduce (Conv2D) (None, 1, 1, 28) 18844 ['block6a_se_reshape[0][0]']
block6a_se_expand (Conv2D) (None, 1, 1, 672) 19488 ['block6a_se_reduce[0][0]']
block6a_se_excite (Multiply) (None, 8, 8, 672) 0 ['block6a_activation[0][0]',
'block6a_se_expand[0][0]']
block6a_project_conv (Conv2D) (None, 8, 8, 192) 129024 ['block6a_se_excite[0][0]']
block6a_project_bn (BatchNorma (None, 8, 8, 192) 768 ['block6a_project_conv[0][0]']
lization)
block6b_expand_conv (Conv2D) (None, 8, 8, 1152) 221184 ['block6a_project_bn[0][0]']
block6b_expand_bn (BatchNormal (None, 8, 8, 1152) 4608 ['block6b_expand_conv[0][0]']
ization)
block6b_expand_activation (Act (None, 8, 8, 1152) 0 ['block6b_expand_bn[0][0]']
ivation)
block6b_dwconv (DepthwiseConv2 (None, 8, 8, 1152) 28800 ['block6b_expand_activation[0][0]
D) ']
block6b_bn (BatchNormalization (None, 8, 8, 1152) 4608 ['block6b_dwconv[0][0]']
)
block6b_activation (Activation (None, 8, 8, 1152) 0 ['block6b_bn[0][0]']
)
block6b_se_squeeze (GlobalAver (None, 1152) 0 ['block6b_activation[0][0]']
agePooling2D)
block6b_se_reshape (Reshape) (None, 1, 1, 1152) 0 ['block6b_se_squeeze[0][0]']
block6b_se_reduce (Conv2D) (None, 1, 1, 48) 55344 ['block6b_se_reshape[0][0]']
block6b_se_expand (Conv2D) (None, 1, 1, 1152) 56448 ['block6b_se_reduce[0][0]']
block6b_se_excite (Multiply) (None, 8, 8, 1152) 0 ['block6b_activation[0][0]',
'block6b_se_expand[0][0]']
block6b_project_conv (Conv2D) (None, 8, 8, 192) 221184 ['block6b_se_excite[0][0]']
block6b_project_bn (BatchNorma (None, 8, 8, 192) 768 ['block6b_project_conv[0][0]']
lization)
block6b_drop (FixedDropout) (None, 8, 8, 192) 0 ['block6b_project_bn[0][0]']
block6b_add (Add) (None, 8, 8, 192) 0 ['block6b_drop[0][0]',
'block6a_project_bn[0][0]']
block6c_expand_conv (Conv2D) (None, 8, 8, 1152) 221184 ['block6b_add[0][0]']
block6c_expand_bn (BatchNormal (None, 8, 8, 1152) 4608 ['block6c_expand_conv[0][0]']
ization)
block6c_expand_activation (Act (None, 8, 8, 1152) 0 ['block6c_expand_bn[0][0]']
ivation)
block6c_dwconv (DepthwiseConv2 (None, 8, 8, 1152) 28800 ['block6c_expand_activation[0][0]
D) ']
block6c_bn (BatchNormalization (None, 8, 8, 1152) 4608 ['block6c_dwconv[0][0]']
)
block6c_activation (Activation (None, 8, 8, 1152) 0 ['block6c_bn[0][0]']
)
block6c_se_squeeze (GlobalAver (None, 1152) 0 ['block6c_activation[0][0]']
agePooling2D)
block6c_se_reshape (Reshape) (None, 1, 1, 1152) 0 ['block6c_se_squeeze[0][0]']
block6c_se_reduce (Conv2D) (None, 1, 1, 48) 55344 ['block6c_se_reshape[0][0]']
block6c_se_expand (Conv2D) (None, 1, 1, 1152) 56448 ['block6c_se_reduce[0][0]']
block6c_se_excite (Multiply) (None, 8, 8, 1152) 0 ['block6c_activation[0][0]',
'block6c_se_expand[0][0]']
block6c_project_conv (Conv2D) (None, 8, 8, 192) 221184 ['block6c_se_excite[0][0]']
block6c_project_bn (BatchNorma (None, 8, 8, 192) 768 ['block6c_project_conv[0][0]']
lization)
block6c_drop (FixedDropout) (None, 8, 8, 192) 0 ['block6c_project_bn[0][0]']
block6c_add (Add) (None, 8, 8, 192) 0 ['block6c_drop[0][0]',
'block6b_add[0][0]']
block6d_expand_conv (Conv2D) (None, 8, 8, 1152) 221184 ['block6c_add[0][0]']
block6d_expand_bn (BatchNormal (None, 8, 8, 1152) 4608 ['block6d_expand_conv[0][0]']
ization)
block6d_expand_activation (Act (None, 8, 8, 1152) 0 ['block6d_expand_bn[0][0]']
ivation)
block6d_dwconv (DepthwiseConv2 (None, 8, 8, 1152) 28800 ['block6d_expand_activation[0][0]
D) ']
block6d_bn (BatchNormalization (None, 8, 8, 1152) 4608 ['block6d_dwconv[0][0]']
)
block6d_activation (Activation (None, 8, 8, 1152) 0 ['block6d_bn[0][0]']
)
block6d_se_squeeze (GlobalAver (None, 1152) 0 ['block6d_activation[0][0]']
agePooling2D)
block6d_se_reshape (Reshape) (None, 1, 1, 1152) 0 ['block6d_se_squeeze[0][0]']
block6d_se_reduce (Conv2D) (None, 1, 1, 48) 55344 ['block6d_se_reshape[0][0]']
block6d_se_expand (Conv2D) (None, 1, 1, 1152) 56448 ['block6d_se_reduce[0][0]']
block6d_se_excite (Multiply) (None, 8, 8, 1152) 0 ['block6d_activation[0][0]',
'block6d_se_expand[0][0]']
block6d_project_conv (Conv2D) (None, 8, 8, 192) 221184 ['block6d_se_excite[0][0]']
block6d_project_bn (BatchNorma (None, 8, 8, 192) 768 ['block6d_project_conv[0][0]']
lization)
block6d_drop (FixedDropout) (None, 8, 8, 192) 0 ['block6d_project_bn[0][0]']
block6d_add (Add) (None, 8, 8, 192) 0 ['block6d_drop[0][0]',
'block6c_add[0][0]']
block6e_expand_conv (Conv2D) (None, 8, 8, 1152) 221184 ['block6d_add[0][0]']
block6e_expand_bn (BatchNormal (None, 8, 8, 1152) 4608 ['block6e_expand_conv[0][0]']
ization)
block6e_expand_activation (Act (None, 8, 8, 1152) 0 ['block6e_expand_bn[0][0]']
ivation)
block6e_dwconv (DepthwiseConv2 (None, 8, 8, 1152) 28800 ['block6e_expand_activation[0][0]
D) ']
block6e_bn (BatchNormalization (None, 8, 8, 1152) 4608 ['block6e_dwconv[0][0]']
)
block6e_activation (Activation (None, 8, 8, 1152) 0 ['block6e_bn[0][0]']
)
block6e_se_squeeze (GlobalAver (None, 1152) 0 ['block6e_activation[0][0]']
agePooling2D)
block6e_se_reshape (Reshape) (None, 1, 1, 1152) 0 ['block6e_se_squeeze[0][0]']
block6e_se_reduce (Conv2D) (None, 1, 1, 48) 55344 ['block6e_se_reshape[0][0]']
block6e_se_expand (Conv2D) (None, 1, 1, 1152) 56448 ['block6e_se_reduce[0][0]']
block6e_se_excite (Multiply) (None, 8, 8, 1152) 0 ['block6e_activation[0][0]',
'block6e_se_expand[0][0]']
block6e_project_conv (Conv2D) (None, 8, 8, 192) 221184 ['block6e_se_excite[0][0]']
block6e_project_bn (BatchNorma (None, 8, 8, 192) 768 ['block6e_project_conv[0][0]']
lization)
block6e_drop (FixedDropout) (None, 8, 8, 192) 0 ['block6e_project_bn[0][0]']
block6e_add (Add) (None, 8, 8, 192) 0 ['block6e_drop[0][0]',
'block6d_add[0][0]']
block7a_expand_conv (Conv2D) (None, 8, 8, 1152) 221184 ['block6e_add[0][0]']
block7a_expand_bn (BatchNormal (None, 8, 8, 1152) 4608 ['block7a_expand_conv[0][0]']
ization)
block7a_expand_activation (Act (None, 8, 8, 1152) 0 ['block7a_expand_bn[0][0]']
ivation)
block7a_dwconv (DepthwiseConv2 (None, 8, 8, 1152) 10368 ['block7a_expand_activation[0][0]
D) ']
block7a_bn (BatchNormalization (None, 8, 8, 1152) 4608 ['block7a_dwconv[0][0]']
)
block7a_activation (Activation (None, 8, 8, 1152) 0 ['block7a_bn[0][0]']
)
block7a_se_squeeze (GlobalAver (None, 1152) 0 ['block7a_activation[0][0]']
agePooling2D)
block7a_se_reshape (Reshape) (None, 1, 1, 1152) 0 ['block7a_se_squeeze[0][0]']
block7a_se_reduce (Conv2D) (None, 1, 1, 48) 55344 ['block7a_se_reshape[0][0]']
block7a_se_expand (Conv2D) (None, 1, 1, 1152) 56448 ['block7a_se_reduce[0][0]']
block7a_se_excite (Multiply) (None, 8, 8, 1152) 0 ['block7a_activation[0][0]',
'block7a_se_expand[0][0]']
block7a_project_conv (Conv2D) (None, 8, 8, 320) 368640 ['block7a_se_excite[0][0]']
block7a_project_bn (BatchNorma (None, 8, 8, 320) 1280 ['block7a_project_conv[0][0]']
lization)
block7b_expand_conv (Conv2D) (None, 8, 8, 1920) 614400 ['block7a_project_bn[0][0]']
block7b_expand_bn (BatchNormal (None, 8, 8, 1920) 7680 ['block7b_expand_conv[0][0]']
ization)
block7b_expand_activation (Act (None, 8, 8, 1920) 0 ['block7b_expand_bn[0][0]']
ivation)
block7b_dwconv (DepthwiseConv2 (None, 8, 8, 1920) 17280 ['block7b_expand_activation[0][0]
D) ']
block7b_bn (BatchNormalization (None, 8, 8, 1920) 7680 ['block7b_dwconv[0][0]']
)
block7b_activation (Activation (None, 8, 8, 1920) 0 ['block7b_bn[0][0]']
)
block7b_se_squeeze (GlobalAver (None, 1920) 0 ['block7b_activation[0][0]']
agePooling2D)
block7b_se_reshape (Reshape) (None, 1, 1, 1920) 0 ['block7b_se_squeeze[0][0]']
block7b_se_reduce (Conv2D) (None, 1, 1, 80) 153680 ['block7b_se_reshape[0][0]']
block7b_se_expand (Conv2D) (None, 1, 1, 1920) 155520 ['block7b_se_reduce[0][0]']
block7b_se_excite (Multiply) (None, 8, 8, 1920) 0 ['block7b_activation[0][0]',
'block7b_se_expand[0][0]']
block7b_project_conv (Conv2D) (None, 8, 8, 320) 614400 ['block7b_se_excite[0][0]']
block7b_project_bn (BatchNorma (None, 8, 8, 320) 1280 ['block7b_project_conv[0][0]']
lization)
block7b_drop (FixedDropout) (None, 8, 8, 320) 0 ['block7b_project_bn[0][0]']
block7b_add (Add) (None, 8, 8, 320) 0 ['block7b_drop[0][0]',
'block7a_project_bn[0][0]']
top_conv (Conv2D) (None, 8, 8, 1280) 409600 ['block7b_add[0][0]']
top_bn (BatchNormalization) (None, 8, 8, 1280) 5120 ['top_conv[0][0]']
top_activation (Activation) (None, 8, 8, 1280) 0 ['top_bn[0][0]']
flatten_1 (Flatten) (None, 81920) 0 ['top_activation[0][0]']
dense_2 (Dense) (None, 1024) 83887104 ['flatten_1[0][0]']
dropout_1 (Dropout) (None, 1024) 0 ['dense_2[0][0]']
dense_3 (Dense) (None, 1) 1025 ['dropout_1[0][0]']
==================================================================================================
Total params: 90,463,361
Trainable params: 83,888,129
Non-trainable params: 6,575,232
__________________________________________________________________________________________________
#get total parameters
model_params = model_final.count_params()
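The trainable vs. non-trainable split reported in the summary (83,888,129 vs. 6,575,232) can be cross-checked directly from the weight lists; a minimal sketch using `tf.keras.backend.count_params`:
# Cross-check trainable and non-trainable parameter counts from the weight lists (sketch)
import numpy as np
from tensorflow.keras import backend as K
trainable_params = int(np.sum([K.count_params(w) for w in model_final.trainable_weights]))
non_trainable_params = int(np.sum([K.count_params(w) for w in model_final.non_trainable_weights]))
print(trainable_params, non_trainable_params)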
# Specify the optimizer, loss function and evaluation metrics.
model_final.compile(loss='binary_crossentropy', optimizer=tf.keras.optimizers.RMSprop(learning_rate=0.0001, weight_decay=1e-6), metrics=['accuracy'])
t1 = time.time()
#train the model
eff_history = model_final.fit_generator(train_generator, validation_data = validation_generator, steps_per_epoch = 100, epochs = 10)
fit_time = time.time() - t1
<ipython-input-47-b7f31b017b18>:3: UserWarning: `Model.fit_generator` is deprecated and will be removed in a future version. Please use `Model.fit`, which supports generators. eff_history = model_final.fit_generator(train_generator, validation_data = validation_generator, steps_per_epoch = 100, epochs = 10)
Epoch 1/10 100/100 [==============================] - 46s 376ms/step - loss: 0.3899 - accuracy: 0.9175 - val_loss: 0.0736 - val_accuracy: 0.9837
Epoch 2/10 100/100 [==============================] - 33s 331ms/step - loss: 0.3238 - accuracy: 0.9295 - val_loss: 0.0540 - val_accuracy: 0.9862
Epoch 3/10 100/100 [==============================] - 34s 335ms/step - loss: 0.3243 - accuracy: 0.9415 - val_loss: 0.0531 - val_accuracy: 0.9887
Epoch 4/10 100/100 [==============================] - 33s 334ms/step - loss: 0.2570 - accuracy: 0.9510 - val_loss: 0.0419 - val_accuracy: 0.9912
Epoch 5/10 100/100 [==============================] - 33s 330ms/step - loss: 0.2258 - accuracy: 0.9505 - val_loss: 0.0689 - val_accuracy: 0.9887
Epoch 6/10 100/100 [==============================] - 35s 350ms/step - loss: 0.2548 - accuracy: 0.9510 - val_loss: 0.0650 - val_accuracy: 0.9875
Epoch 7/10 100/100 [==============================] - 33s 330ms/step - loss: 0.2618 - accuracy: 0.9466 - val_loss: 0.0591 - val_accuracy: 0.9850
Epoch 8/10 100/100 [==============================] - 33s 329ms/step - loss: 0.2577 - accuracy: 0.9440 - val_loss: 0.0594 - val_accuracy: 0.9850
Epoch 9/10 100/100 [==============================] - 33s 328ms/step - loss: 0.2216 - accuracy: 0.9515 - val_loss: 0.0670 - val_accuracy: 0.9862
Epoch 10/10 100/100 [==============================] - 33s 328ms/step - loss: 0.2298 - accuracy: 0.9597 - val_loss: 0.0616 - val_accuracy: 0.9887
# time it took to fit the model
print(fit_time)
349.48619532585144
#Plot training and validation accuracy and loss for each epoch
acc = eff_history.history['accuracy']
val_acc = eff_history.history['val_accuracy']
loss = eff_history.history['loss']
val_loss = eff_history.history['val_loss']
epochs = range(1,len(acc) + 1)
plt.plot(epochs,acc,label = 'Training Accuracy')
plt.plot(epochs,val_acc,label = 'Validation Accuracy')
plt.title('Training and Validation Accuracy')
plt.legend()
plt.figure()
plt.plot(epochs,loss,label = 'Training loss')
plt.plot(epochs,val_loss,label = 'Validation Loss')
plt.title('Training and Validation Loss')
plt.legend()
plt.show()
# Test dataset
test_datagen = ImageDataGenerator(rescale=1./255)
test_generator = test_datagen.flow_from_directory(
test_dir,
target_size=(240, 240),
shuffle = False,
class_mode='binary',
batch_size=1)
Found 2023 images belonging to 2 classes.
#Get test length
filenames = test_generator.filenames
nb_samples = len(filenames)
#Predict on test set
predict = model_final.predict_generator(test_generator,steps = nb_samples)
<ipython-input-52-7710eff794cf>:2: UserWarning: `Model.predict_generator` is deprecated and will be removed in a future version. Please use `Model.predict`, which supports generators. predict = model_final.predict_generator(test_generator,steps = nb_samples)
#Get list of prediction results
pred_list = []
for i in predict:
if i > 0.5:
result = 1 #dog
pred_list.append(result)
else:
result = 0 #cat
pred_list.append(result)
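Because predict holds sigmoid probabilities, the element-by-element loop above can also be written as one vectorized comparison. A small sketch, using Model.predict as the deprecation warning suggests (predict and nb_samples are the variables defined above):
import numpy as np
# Non-deprecated prediction call followed by vectorized thresholding (1 = dog, 0 = cat)
predict = model_final.predict(test_generator, steps = nb_samples)
pred_list = (np.asarray(predict).ravel() > 0.5).astype(int).tolist()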
#Create dataframe of image ID, image true label, image predicted label
import pandas as pd
image_ids = [name.split('/')[-1] for name in test_generator.filenames]
image_label = [name.split('/')[0] for name in test_generator.filenames]
data = {'id': image_ids, 'label':image_label, 'prediction':pred_list}
data_df = pd.DataFrame(data)
data_df.label.replace(('cats', 'dogs'), (0, 1), inplace=True) # change cat and dog label to 0 or 1
#Get test accuracy score
from sklearn.metrics import accuracy_score, confusion_matrix
test_accuracy = accuracy_score(data_df['label'], data_df['prediction'])
print('Test Accuracy: ', round((test_accuracy * 100), 2), "%")
Test Accuracy: 98.52 %
from sklearn.metrics import classification_report
#Classification Report
print(classification_report(data_df['label'], data_df['prediction']))
precision recall f1-score support
0 0.99 0.98 0.99 1011
1 0.98 0.99 0.99 1012
accuracy 0.99 2023
macro avg 0.99 0.99 0.99 2023
weighted avg 0.99 0.99 0.99 2023
#Create confusion matrix
import seaborn as sns
label = [0, 1] #0 = cat and 1 = dog
cm = confusion_matrix(data_df['label'], data_df['prediction'], labels = label)
#Plot
ax= plt.subplot()
sns.heatmap(cm, annot=True, fmt='g', ax=ax);
# labels, title and ticks
ax.set_xlabel('Predicted labels');ax.set_ylabel('True labels');
ax.set_title('Confusion Matrix');
ax.xaxis.set_ticklabels(["Cat", "Dog"]); ax.yaxis.set_ticklabels(["Cat", "Dog"])
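The same matrix can also be rendered with scikit-learn's ConfusionMatrixDisplay (already imported near the top of the notebook). A short sketch using the data_df built above:
from sklearn.metrics import confusion_matrix, ConfusionMatrixDisplay
cm = confusion_matrix(data_df['label'], data_df['prediction'], labels = [0, 1])
disp = ConfusionMatrixDisplay(confusion_matrix = cm, display_labels = ["Cat", "Dog"])
disp.plot(cmap = 'Blues')  # draws the counts with a colorbar, same layout as the heatmap above
plt.title('Confusion Matrix')
plt.show()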
ExperimentLog.loc[len(ExperimentLog)] = [
"EfficientNet B1",
240,
"RMSprop",
10,
max(acc),
max(val_acc),
test_accuracy,
fit_time,
model_params
]
ExperimentLog
| | Base Model | Input Resolution | Optimizer | Epochs | Training Accuracy | Validation Accuracy | Test Accuracy | Fit Time (s) | Total Parameters |
|---|---|---|---|---|---|---|---|---|---|
| 0 | EfficientNet B0 | 224 | RMSprop | 10 | 0.951500 | 0.98750 | 0.971330 | 2195.619411 | 68276893 |
| 1 | EfficientNet B0 with decay | 224 | RMSprop | 10 | 0.956171 | 0.98750 | 0.974790 | 381.655278 | 68276893 |
| 2 | EfficientNet B1 | 240 | RMSprop | 10 | 0.959698 | 0.99125 | 0.985171 | 349.486195 | 90463361 |
# Add rescaling and augmentation to ImageDataGenerator for the training set
train_datagen = ImageDataGenerator(rescale = 1./255., rotation_range = 40, width_shift_range = 0.2, height_shift_range = 0.2, shear_range = 0.2, zoom_range = 0.2, horizontal_flip = True, validation_split=0.1) # set validation split
# Rescale validation set. No augmentation on the validation set.
validation_datagen = ImageDataGenerator(rescale = 1./255.,validation_split=0.1) # set validation split
#Read images directly from directory.
train_generator = train_datagen.flow_from_directory(train_dir, seed = 42, shuffle = True, batch_size = 20, class_mode = 'binary', target_size = (260, 260), subset='training') #set as training data
validation_generator = validation_datagen.flow_from_directory(train_dir, seed = 42, shuffle = True, batch_size = 20, class_mode = 'binary', target_size = (260, 260), subset='validation') # same directory as training data. Set as validation data
Found 7205 images belonging to 2 classes.
Found 800 images belonging to 2 classes.
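Before training, the generators can be sanity-checked by pulling one batch and inspecting its shape, value range, and class mapping. A quick sketch using the train_generator defined above:
images, labels = next(train_generator)  # one augmented batch
print(images.shape)                     # expected (20, 260, 260, 3): batch x height x width x channels
print(images.min(), images.max())       # pixel values rescaled to the [0, 1] range
print(train_generator.class_indices)    # folder-to-label mapping, e.g. {'cats': 0, 'dogs': 1}
print(labels[:10])                      # binary labels matching class_indices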
#Instantiates the EfficientNet architecture
base_model = efn.EfficientNetB2(input_shape = (260, 260, 3), include_top = False, weights = 'imagenet')
Downloading data from https://github.com/Callidior/keras-applications/releases/download/efficientnet/efficientnet-b2_weights_tf_dim_ordering_tf_kernels_autoaugment_notop.h5 31936256/31936256 [==============================] - 2s 0us/step
# Set trainable attribute to false for all of the base model layers
for layer in base_model.layers:
layer.trainable = False
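A quick check that the freeze took effect is to count trainable vs. non-trainable weights in the base model before attaching the new head. A small sketch (K.count_params comes from the Keras backend):
from tensorflow.keras import backend as K
trainable_params = sum(K.count_params(w) for w in base_model.trainable_weights)
frozen_params = sum(K.count_params(w) for w in base_model.non_trainable_weights)
print('Trainable params in base model:', trainable_params)  # expected 0 after freezing
print('Frozen params in base model:   ', frozen_params)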
#Build on top of existing base model.
x = base_model.output
x = layers.Flatten()(x) #convert to 1D array
x = layers.Dense(1024, activation="relu")(x) #fully connected layer with 1,024 hidden units and ReLU activation
x = layers.Dropout(0.5)(x) #Drops 50% of inputs to zero at each training iteration (prevents overfitting)
# Add a final sigmoid layer with 1 node for classification output (probability between 0 and 1)
predictions = layers.Dense(1, activation="sigmoid")(x)
model_final = Model(inputs = base_model.input, outputs = predictions)
#Print model summary using Keras' built-in summary() method (no extra import needed)
model_sum = model_final.summary()
Model: "model_2"
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_3 (InputLayer) [(None, 260, 260, 3 0 []
)]
stem_conv (Conv2D) (None, 130, 130, 32 864 ['input_3[0][0]']
)
stem_bn (BatchNormalization) (None, 130, 130, 32 128 ['stem_conv[0][0]']
)
stem_activation (Activation) (None, 130, 130, 32 0 ['stem_bn[0][0]']
)
block1a_dwconv (DepthwiseConv2 (None, 130, 130, 32 288 ['stem_activation[0][0]']
D) )
block1a_bn (BatchNormalization (None, 130, 130, 32 128 ['block1a_dwconv[0][0]']
) )
block1a_activation (Activation (None, 130, 130, 32 0 ['block1a_bn[0][0]']
) )
block1a_se_squeeze (GlobalAver (None, 32) 0 ['block1a_activation[0][0]']
agePooling2D)
block1a_se_reshape (Reshape) (None, 1, 1, 32) 0 ['block1a_se_squeeze[0][0]']
block1a_se_reduce (Conv2D) (None, 1, 1, 8) 264 ['block1a_se_reshape[0][0]']
block1a_se_expand (Conv2D) (None, 1, 1, 32) 288 ['block1a_se_reduce[0][0]']
block1a_se_excite (Multiply) (None, 130, 130, 32 0 ['block1a_activation[0][0]',
) 'block1a_se_expand[0][0]']
block1a_project_conv (Conv2D) (None, 130, 130, 16 512 ['block1a_se_excite[0][0]']
)
block1a_project_bn (BatchNorma (None, 130, 130, 16 64 ['block1a_project_conv[0][0]']
lization) )
block1b_dwconv (DepthwiseConv2 (None, 130, 130, 16 144 ['block1a_project_bn[0][0]']
D) )
block1b_bn (BatchNormalization (None, 130, 130, 16 64 ['block1b_dwconv[0][0]']
) )
block1b_activation (Activation (None, 130, 130, 16 0 ['block1b_bn[0][0]']
) )
block1b_se_squeeze (GlobalAver (None, 16) 0 ['block1b_activation[0][0]']
agePooling2D)
block1b_se_reshape (Reshape) (None, 1, 1, 16) 0 ['block1b_se_squeeze[0][0]']
block1b_se_reduce (Conv2D) (None, 1, 1, 4) 68 ['block1b_se_reshape[0][0]']
block1b_se_expand (Conv2D) (None, 1, 1, 16) 80 ['block1b_se_reduce[0][0]']
block1b_se_excite (Multiply) (None, 130, 130, 16 0 ['block1b_activation[0][0]',
) 'block1b_se_expand[0][0]']
block1b_project_conv (Conv2D) (None, 130, 130, 16 256 ['block1b_se_excite[0][0]']
)
block1b_project_bn (BatchNorma (None, 130, 130, 16 64 ['block1b_project_conv[0][0]']
lization) )
block1b_drop (FixedDropout) (None, 130, 130, 16 0 ['block1b_project_bn[0][0]']
)
block1b_add (Add) (None, 130, 130, 16 0 ['block1b_drop[0][0]',
) 'block1a_project_bn[0][0]']
block2a_expand_conv (Conv2D) (None, 130, 130, 96 1536 ['block1b_add[0][0]']
)
block2a_expand_bn (BatchNormal (None, 130, 130, 96 384 ['block2a_expand_conv[0][0]']
ization) )
block2a_expand_activation (Act (None, 130, 130, 96 0 ['block2a_expand_bn[0][0]']
ivation) )
block2a_dwconv (DepthwiseConv2 (None, 65, 65, 96) 864 ['block2a_expand_activation[0][0]
D) ']
block2a_bn (BatchNormalization (None, 65, 65, 96) 384 ['block2a_dwconv[0][0]']
)
block2a_activation (Activation (None, 65, 65, 96) 0 ['block2a_bn[0][0]']
)
block2a_se_squeeze (GlobalAver (None, 96) 0 ['block2a_activation[0][0]']
agePooling2D)
block2a_se_reshape (Reshape) (None, 1, 1, 96) 0 ['block2a_se_squeeze[0][0]']
block2a_se_reduce (Conv2D) (None, 1, 1, 4) 388 ['block2a_se_reshape[0][0]']
block2a_se_expand (Conv2D) (None, 1, 1, 96) 480 ['block2a_se_reduce[0][0]']
block2a_se_excite (Multiply) (None, 65, 65, 96) 0 ['block2a_activation[0][0]',
'block2a_se_expand[0][0]']
block2a_project_conv (Conv2D) (None, 65, 65, 24) 2304 ['block2a_se_excite[0][0]']
block2a_project_bn (BatchNorma (None, 65, 65, 24) 96 ['block2a_project_conv[0][0]']
lization)
block2b_expand_conv (Conv2D) (None, 65, 65, 144) 3456 ['block2a_project_bn[0][0]']
block2b_expand_bn (BatchNormal (None, 65, 65, 144) 576 ['block2b_expand_conv[0][0]']
ization)
block2b_expand_activation (Act (None, 65, 65, 144) 0 ['block2b_expand_bn[0][0]']
ivation)
block2b_dwconv (DepthwiseConv2 (None, 65, 65, 144) 1296 ['block2b_expand_activation[0][0]
D) ']
block2b_bn (BatchNormalization (None, 65, 65, 144) 576 ['block2b_dwconv[0][0]']
)
block2b_activation (Activation (None, 65, 65, 144) 0 ['block2b_bn[0][0]']
)
block2b_se_squeeze (GlobalAver (None, 144) 0 ['block2b_activation[0][0]']
agePooling2D)
block2b_se_reshape (Reshape) (None, 1, 1, 144) 0 ['block2b_se_squeeze[0][0]']
block2b_se_reduce (Conv2D) (None, 1, 1, 6) 870 ['block2b_se_reshape[0][0]']
block2b_se_expand (Conv2D) (None, 1, 1, 144) 1008 ['block2b_se_reduce[0][0]']
block2b_se_excite (Multiply) (None, 65, 65, 144) 0 ['block2b_activation[0][0]',
'block2b_se_expand[0][0]']
block2b_project_conv (Conv2D) (None, 65, 65, 24) 3456 ['block2b_se_excite[0][0]']
block2b_project_bn (BatchNorma (None, 65, 65, 24) 96 ['block2b_project_conv[0][0]']
lization)
block2b_drop (FixedDropout) (None, 65, 65, 24) 0 ['block2b_project_bn[0][0]']
block2b_add (Add) (None, 65, 65, 24) 0 ['block2b_drop[0][0]',
'block2a_project_bn[0][0]']
block2c_expand_conv (Conv2D) (None, 65, 65, 144) 3456 ['block2b_add[0][0]']
block2c_expand_bn (BatchNormal (None, 65, 65, 144) 576 ['block2c_expand_conv[0][0]']
ization)
block2c_expand_activation (Act (None, 65, 65, 144) 0 ['block2c_expand_bn[0][0]']
ivation)
block2c_dwconv (DepthwiseConv2 (None, 65, 65, 144) 1296 ['block2c_expand_activation[0][0]
D) ']
block2c_bn (BatchNormalization (None, 65, 65, 144) 576 ['block2c_dwconv[0][0]']
)
block2c_activation (Activation (None, 65, 65, 144) 0 ['block2c_bn[0][0]']
)
block2c_se_squeeze (GlobalAver (None, 144) 0 ['block2c_activation[0][0]']
agePooling2D)
block2c_se_reshape (Reshape) (None, 1, 1, 144) 0 ['block2c_se_squeeze[0][0]']
block2c_se_reduce (Conv2D) (None, 1, 1, 6) 870 ['block2c_se_reshape[0][0]']
block2c_se_expand (Conv2D) (None, 1, 1, 144) 1008 ['block2c_se_reduce[0][0]']
block2c_se_excite (Multiply) (None, 65, 65, 144) 0 ['block2c_activation[0][0]',
'block2c_se_expand[0][0]']
block2c_project_conv (Conv2D) (None, 65, 65, 24) 3456 ['block2c_se_excite[0][0]']
block2c_project_bn (BatchNorma (None, 65, 65, 24) 96 ['block2c_project_conv[0][0]']
lization)
block2c_drop (FixedDropout) (None, 65, 65, 24) 0 ['block2c_project_bn[0][0]']
block2c_add (Add) (None, 65, 65, 24) 0 ['block2c_drop[0][0]',
'block2b_add[0][0]']
block3a_expand_conv (Conv2D) (None, 65, 65, 144) 3456 ['block2c_add[0][0]']
block3a_expand_bn (BatchNormal (None, 65, 65, 144) 576 ['block3a_expand_conv[0][0]']
ization)
block3a_expand_activation (Act (None, 65, 65, 144) 0 ['block3a_expand_bn[0][0]']
ivation)
block3a_dwconv (DepthwiseConv2 (None, 33, 33, 144) 3600 ['block3a_expand_activation[0][0]
D) ']
block3a_bn (BatchNormalization (None, 33, 33, 144) 576 ['block3a_dwconv[0][0]']
)
block3a_activation (Activation (None, 33, 33, 144) 0 ['block3a_bn[0][0]']
)
block3a_se_squeeze (GlobalAver (None, 144) 0 ['block3a_activation[0][0]']
agePooling2D)
block3a_se_reshape (Reshape) (None, 1, 1, 144) 0 ['block3a_se_squeeze[0][0]']
block3a_se_reduce (Conv2D) (None, 1, 1, 6) 870 ['block3a_se_reshape[0][0]']
block3a_se_expand (Conv2D) (None, 1, 1, 144) 1008 ['block3a_se_reduce[0][0]']
block3a_se_excite (Multiply) (None, 33, 33, 144) 0 ['block3a_activation[0][0]',
'block3a_se_expand[0][0]']
block3a_project_conv (Conv2D) (None, 33, 33, 48) 6912 ['block3a_se_excite[0][0]']
block3a_project_bn (BatchNorma (None, 33, 33, 48) 192 ['block3a_project_conv[0][0]']
lization)
block3b_expand_conv (Conv2D) (None, 33, 33, 288) 13824 ['block3a_project_bn[0][0]']
block3b_expand_bn (BatchNormal (None, 33, 33, 288) 1152 ['block3b_expand_conv[0][0]']
ization)
block3b_expand_activation (Act (None, 33, 33, 288) 0 ['block3b_expand_bn[0][0]']
ivation)
block3b_dwconv (DepthwiseConv2 (None, 33, 33, 288) 7200 ['block3b_expand_activation[0][0]
D) ']
block3b_bn (BatchNormalization (None, 33, 33, 288) 1152 ['block3b_dwconv[0][0]']
)
block3b_activation (Activation (None, 33, 33, 288) 0 ['block3b_bn[0][0]']
)
block3b_se_squeeze (GlobalAver (None, 288) 0 ['block3b_activation[0][0]']
agePooling2D)
block3b_se_reshape (Reshape) (None, 1, 1, 288) 0 ['block3b_se_squeeze[0][0]']
block3b_se_reduce (Conv2D) (None, 1, 1, 12) 3468 ['block3b_se_reshape[0][0]']
block3b_se_expand (Conv2D) (None, 1, 1, 288) 3744 ['block3b_se_reduce[0][0]']
block3b_se_excite (Multiply) (None, 33, 33, 288) 0 ['block3b_activation[0][0]',
'block3b_se_expand[0][0]']
block3b_project_conv (Conv2D) (None, 33, 33, 48) 13824 ['block3b_se_excite[0][0]']
block3b_project_bn (BatchNorma (None, 33, 33, 48) 192 ['block3b_project_conv[0][0]']
lization)
block3b_drop (FixedDropout) (None, 33, 33, 48) 0 ['block3b_project_bn[0][0]']
block3b_add (Add) (None, 33, 33, 48) 0 ['block3b_drop[0][0]',
'block3a_project_bn[0][0]']
block3c_expand_conv (Conv2D) (None, 33, 33, 288) 13824 ['block3b_add[0][0]']
block3c_expand_bn (BatchNormal (None, 33, 33, 288) 1152 ['block3c_expand_conv[0][0]']
ization)
block3c_expand_activation (Act (None, 33, 33, 288) 0 ['block3c_expand_bn[0][0]']
ivation)
block3c_dwconv (DepthwiseConv2 (None, 33, 33, 288) 7200 ['block3c_expand_activation[0][0]
D) ']
block3c_bn (BatchNormalization (None, 33, 33, 288) 1152 ['block3c_dwconv[0][0]']
)
block3c_activation (Activation (None, 33, 33, 288) 0 ['block3c_bn[0][0]']
)
block3c_se_squeeze (GlobalAver (None, 288) 0 ['block3c_activation[0][0]']
agePooling2D)
block3c_se_reshape (Reshape) (None, 1, 1, 288) 0 ['block3c_se_squeeze[0][0]']
block3c_se_reduce (Conv2D) (None, 1, 1, 12) 3468 ['block3c_se_reshape[0][0]']
block3c_se_expand (Conv2D) (None, 1, 1, 288) 3744 ['block3c_se_reduce[0][0]']
block3c_se_excite (Multiply) (None, 33, 33, 288) 0 ['block3c_activation[0][0]',
'block3c_se_expand[0][0]']
block3c_project_conv (Conv2D) (None, 33, 33, 48) 13824 ['block3c_se_excite[0][0]']
block3c_project_bn (BatchNorma (None, 33, 33, 48) 192 ['block3c_project_conv[0][0]']
lization)
block3c_drop (FixedDropout) (None, 33, 33, 48) 0 ['block3c_project_bn[0][0]']
block3c_add (Add) (None, 33, 33, 48) 0 ['block3c_drop[0][0]',
'block3b_add[0][0]']
block4a_expand_conv (Conv2D) (None, 33, 33, 288) 13824 ['block3c_add[0][0]']
block4a_expand_bn (BatchNormal (None, 33, 33, 288) 1152 ['block4a_expand_conv[0][0]']
ization)
block4a_expand_activation (Act (None, 33, 33, 288) 0 ['block4a_expand_bn[0][0]']
ivation)
block4a_dwconv (DepthwiseConv2 (None, 17, 17, 288) 2592 ['block4a_expand_activation[0][0]
D) ']
block4a_bn (BatchNormalization (None, 17, 17, 288) 1152 ['block4a_dwconv[0][0]']
)
block4a_activation (Activation (None, 17, 17, 288) 0 ['block4a_bn[0][0]']
)
block4a_se_squeeze (GlobalAver (None, 288) 0 ['block4a_activation[0][0]']
agePooling2D)
block4a_se_reshape (Reshape) (None, 1, 1, 288) 0 ['block4a_se_squeeze[0][0]']
block4a_se_reduce (Conv2D) (None, 1, 1, 12) 3468 ['block4a_se_reshape[0][0]']
block4a_se_expand (Conv2D) (None, 1, 1, 288) 3744 ['block4a_se_reduce[0][0]']
block4a_se_excite (Multiply) (None, 17, 17, 288) 0 ['block4a_activation[0][0]',
'block4a_se_expand[0][0]']
block4a_project_conv (Conv2D) (None, 17, 17, 88) 25344 ['block4a_se_excite[0][0]']
block4a_project_bn (BatchNorma (None, 17, 17, 88) 352 ['block4a_project_conv[0][0]']
lization)
block4b_expand_conv (Conv2D) (None, 17, 17, 528) 46464 ['block4a_project_bn[0][0]']
block4b_expand_bn (BatchNormal (None, 17, 17, 528) 2112 ['block4b_expand_conv[0][0]']
ization)
block4b_expand_activation (Act (None, 17, 17, 528) 0 ['block4b_expand_bn[0][0]']
ivation)
block4b_dwconv (DepthwiseConv2 (None, 17, 17, 528) 4752 ['block4b_expand_activation[0][0]
D) ']
block4b_bn (BatchNormalization (None, 17, 17, 528) 2112 ['block4b_dwconv[0][0]']
)
block4b_activation (Activation (None, 17, 17, 528) 0 ['block4b_bn[0][0]']
)
block4b_se_squeeze (GlobalAver (None, 528) 0 ['block4b_activation[0][0]']
agePooling2D)
block4b_se_reshape (Reshape) (None, 1, 1, 528) 0 ['block4b_se_squeeze[0][0]']
block4b_se_reduce (Conv2D) (None, 1, 1, 22) 11638 ['block4b_se_reshape[0][0]']
block4b_se_expand (Conv2D) (None, 1, 1, 528) 12144 ['block4b_se_reduce[0][0]']
block4b_se_excite (Multiply) (None, 17, 17, 528) 0 ['block4b_activation[0][0]',
'block4b_se_expand[0][0]']
block4b_project_conv (Conv2D) (None, 17, 17, 88) 46464 ['block4b_se_excite[0][0]']
block4b_project_bn (BatchNorma (None, 17, 17, 88) 352 ['block4b_project_conv[0][0]']
lization)
block4b_drop (FixedDropout) (None, 17, 17, 88) 0 ['block4b_project_bn[0][0]']
block4b_add (Add) (None, 17, 17, 88) 0 ['block4b_drop[0][0]',
'block4a_project_bn[0][0]']
block4c_expand_conv (Conv2D) (None, 17, 17, 528) 46464 ['block4b_add[0][0]']
block4c_expand_bn (BatchNormal (None, 17, 17, 528) 2112 ['block4c_expand_conv[0][0]']
ization)
block4c_expand_activation (Act (None, 17, 17, 528) 0 ['block4c_expand_bn[0][0]']
ivation)
block4c_dwconv (DepthwiseConv2 (None, 17, 17, 528) 4752 ['block4c_expand_activation[0][0]
D) ']
block4c_bn (BatchNormalization (None, 17, 17, 528) 2112 ['block4c_dwconv[0][0]']
)
block4c_activation (Activation (None, 17, 17, 528) 0 ['block4c_bn[0][0]']
)
block4c_se_squeeze (GlobalAver (None, 528) 0 ['block4c_activation[0][0]']
agePooling2D)
block4c_se_reshape (Reshape) (None, 1, 1, 528) 0 ['block4c_se_squeeze[0][0]']
block4c_se_reduce (Conv2D) (None, 1, 1, 22) 11638 ['block4c_se_reshape[0][0]']
block4c_se_expand (Conv2D) (None, 1, 1, 528) 12144 ['block4c_se_reduce[0][0]']
block4c_se_excite (Multiply) (None, 17, 17, 528) 0 ['block4c_activation[0][0]',
'block4c_se_expand[0][0]']
block4c_project_conv (Conv2D) (None, 17, 17, 88) 46464 ['block4c_se_excite[0][0]']
block4c_project_bn (BatchNorma (None, 17, 17, 88) 352 ['block4c_project_conv[0][0]']
lization)
block4c_drop (FixedDropout) (None, 17, 17, 88) 0 ['block4c_project_bn[0][0]']
block4c_add (Add) (None, 17, 17, 88) 0 ['block4c_drop[0][0]',
'block4b_add[0][0]']
block4d_expand_conv (Conv2D) (None, 17, 17, 528) 46464 ['block4c_add[0][0]']
block4d_expand_bn (BatchNormal (None, 17, 17, 528) 2112 ['block4d_expand_conv[0][0]']
ization)
block4d_expand_activation (Act (None, 17, 17, 528) 0 ['block4d_expand_bn[0][0]']
ivation)
block4d_dwconv (DepthwiseConv2 (None, 17, 17, 528) 4752 ['block4d_expand_activation[0][0]
D) ']
block4d_bn (BatchNormalization (None, 17, 17, 528) 2112 ['block4d_dwconv[0][0]']
)
block4d_activation (Activation (None, 17, 17, 528) 0 ['block4d_bn[0][0]']
)
block4d_se_squeeze (GlobalAver (None, 528) 0 ['block4d_activation[0][0]']
agePooling2D)
block4d_se_reshape (Reshape) (None, 1, 1, 528) 0 ['block4d_se_squeeze[0][0]']
block4d_se_reduce (Conv2D) (None, 1, 1, 22) 11638 ['block4d_se_reshape[0][0]']
block4d_se_expand (Conv2D) (None, 1, 1, 528) 12144 ['block4d_se_reduce[0][0]']
block4d_se_excite (Multiply) (None, 17, 17, 528) 0 ['block4d_activation[0][0]',
'block4d_se_expand[0][0]']
block4d_project_conv (Conv2D) (None, 17, 17, 88) 46464 ['block4d_se_excite[0][0]']
block4d_project_bn (BatchNorma (None, 17, 17, 88) 352 ['block4d_project_conv[0][0]']
lization)
block4d_drop (FixedDropout) (None, 17, 17, 88) 0 ['block4d_project_bn[0][0]']
block4d_add (Add) (None, 17, 17, 88) 0 ['block4d_drop[0][0]',
'block4c_add[0][0]']
block5a_expand_conv (Conv2D) (None, 17, 17, 528) 46464 ['block4d_add[0][0]']
block5a_expand_bn (BatchNormal (None, 17, 17, 528) 2112 ['block5a_expand_conv[0][0]']
ization)
block5a_expand_activation (Act (None, 17, 17, 528) 0 ['block5a_expand_bn[0][0]']
ivation)
block5a_dwconv (DepthwiseConv2 (None, 17, 17, 528) 13200 ['block5a_expand_activation[0][0]
D) ']
block5a_bn (BatchNormalization (None, 17, 17, 528) 2112 ['block5a_dwconv[0][0]']
)
block5a_activation (Activation (None, 17, 17, 528) 0 ['block5a_bn[0][0]']
)
block5a_se_squeeze (GlobalAver (None, 528) 0 ['block5a_activation[0][0]']
agePooling2D)
block5a_se_reshape (Reshape) (None, 1, 1, 528) 0 ['block5a_se_squeeze[0][0]']
block5a_se_reduce (Conv2D) (None, 1, 1, 22) 11638 ['block5a_se_reshape[0][0]']
block5a_se_expand (Conv2D) (None, 1, 1, 528) 12144 ['block5a_se_reduce[0][0]']
block5a_se_excite (Multiply) (None, 17, 17, 528) 0 ['block5a_activation[0][0]',
'block5a_se_expand[0][0]']
block5a_project_conv (Conv2D) (None, 17, 17, 120) 63360 ['block5a_se_excite[0][0]']
block5a_project_bn (BatchNorma (None, 17, 17, 120) 480 ['block5a_project_conv[0][0]']
lization)
block5b_expand_conv (Conv2D) (None, 17, 17, 720) 86400 ['block5a_project_bn[0][0]']
block5b_expand_bn (BatchNormal (None, 17, 17, 720) 2880 ['block5b_expand_conv[0][0]']
ization)
block5b_expand_activation (Act (None, 17, 17, 720) 0 ['block5b_expand_bn[0][0]']
ivation)
block5b_dwconv (DepthwiseConv2 (None, 17, 17, 720) 18000 ['block5b_expand_activation[0][0]
D) ']
block5b_bn (BatchNormalization (None, 17, 17, 720) 2880 ['block5b_dwconv[0][0]']
)
block5b_activation (Activation (None, 17, 17, 720) 0 ['block5b_bn[0][0]']
)
block5b_se_squeeze (GlobalAver (None, 720) 0 ['block5b_activation[0][0]']
agePooling2D)
block5b_se_reshape (Reshape) (None, 1, 1, 720) 0 ['block5b_se_squeeze[0][0]']
block5b_se_reduce (Conv2D) (None, 1, 1, 30) 21630 ['block5b_se_reshape[0][0]']
block5b_se_expand (Conv2D) (None, 1, 1, 720) 22320 ['block5b_se_reduce[0][0]']
block5b_se_excite (Multiply) (None, 17, 17, 720) 0 ['block5b_activation[0][0]',
'block5b_se_expand[0][0]']
block5b_project_conv (Conv2D) (None, 17, 17, 120) 86400 ['block5b_se_excite[0][0]']
block5b_project_bn (BatchNorma (None, 17, 17, 120) 480 ['block5b_project_conv[0][0]']
lization)
block5b_drop (FixedDropout) (None, 17, 17, 120) 0 ['block5b_project_bn[0][0]']
block5b_add (Add) (None, 17, 17, 120) 0 ['block5b_drop[0][0]',
'block5a_project_bn[0][0]']
block5c_expand_conv (Conv2D) (None, 17, 17, 720) 86400 ['block5b_add[0][0]']
block5c_expand_bn (BatchNormal (None, 17, 17, 720) 2880 ['block5c_expand_conv[0][0]']
ization)
block5c_expand_activation (Act (None, 17, 17, 720) 0 ['block5c_expand_bn[0][0]']
ivation)
block5c_dwconv (DepthwiseConv2 (None, 17, 17, 720) 18000 ['block5c_expand_activation[0][0]
D) ']
block5c_bn (BatchNormalization (None, 17, 17, 720) 2880 ['block5c_dwconv[0][0]']
)
block5c_activation (Activation (None, 17, 17, 720) 0 ['block5c_bn[0][0]']
)
block5c_se_squeeze (GlobalAver (None, 720) 0 ['block5c_activation[0][0]']
agePooling2D)
block5c_se_reshape (Reshape) (None, 1, 1, 720) 0 ['block5c_se_squeeze[0][0]']
block5c_se_reduce (Conv2D) (None, 1, 1, 30) 21630 ['block5c_se_reshape[0][0]']
block5c_se_expand (Conv2D) (None, 1, 1, 720) 22320 ['block5c_se_reduce[0][0]']
block5c_se_excite (Multiply) (None, 17, 17, 720) 0 ['block5c_activation[0][0]',
'block5c_se_expand[0][0]']
block5c_project_conv (Conv2D) (None, 17, 17, 120) 86400 ['block5c_se_excite[0][0]']
block5c_project_bn (BatchNorma (None, 17, 17, 120) 480 ['block5c_project_conv[0][0]']
lization)
block5c_drop (FixedDropout) (None, 17, 17, 120) 0 ['block5c_project_bn[0][0]']
block5c_add (Add) (None, 17, 17, 120) 0 ['block5c_drop[0][0]',
'block5b_add[0][0]']
block5d_expand_conv (Conv2D) (None, 17, 17, 720) 86400 ['block5c_add[0][0]']
block5d_expand_bn (BatchNormal (None, 17, 17, 720) 2880 ['block5d_expand_conv[0][0]']
ization)
block5d_expand_activation (Act (None, 17, 17, 720) 0 ['block5d_expand_bn[0][0]']
ivation)
block5d_dwconv (DepthwiseConv2 (None, 17, 17, 720) 18000 ['block5d_expand_activation[0][0]
D) ']
block5d_bn (BatchNormalization (None, 17, 17, 720) 2880 ['block5d_dwconv[0][0]']
)
block5d_activation (Activation (None, 17, 17, 720) 0 ['block5d_bn[0][0]']
)
block5d_se_squeeze (GlobalAver (None, 720) 0 ['block5d_activation[0][0]']
agePooling2D)
block5d_se_reshape (Reshape) (None, 1, 1, 720) 0 ['block5d_se_squeeze[0][0]']
block5d_se_reduce (Conv2D) (None, 1, 1, 30) 21630 ['block5d_se_reshape[0][0]']
block5d_se_expand (Conv2D) (None, 1, 1, 720) 22320 ['block5d_se_reduce[0][0]']
block5d_se_excite (Multiply) (None, 17, 17, 720) 0 ['block5d_activation[0][0]',
'block5d_se_expand[0][0]']
block5d_project_conv (Conv2D) (None, 17, 17, 120) 86400 ['block5d_se_excite[0][0]']
block5d_project_bn (BatchNorma (None, 17, 17, 120) 480 ['block5d_project_conv[0][0]']
lization)
block5d_drop (FixedDropout) (None, 17, 17, 120) 0 ['block5d_project_bn[0][0]']
block5d_add (Add) (None, 17, 17, 120) 0 ['block5d_drop[0][0]',
'block5c_add[0][0]']
block6a_expand_conv (Conv2D) (None, 17, 17, 720) 86400 ['block5d_add[0][0]']
block6a_expand_bn (BatchNormal (None, 17, 17, 720) 2880 ['block6a_expand_conv[0][0]']
ization)
block6a_expand_activation (Act (None, 17, 17, 720) 0 ['block6a_expand_bn[0][0]']
ivation)
block6a_dwconv (DepthwiseConv2 (None, 9, 9, 720) 18000 ['block6a_expand_activation[0][0]
D) ']
block6a_bn (BatchNormalization (None, 9, 9, 720) 2880 ['block6a_dwconv[0][0]']
)
block6a_activation (Activation (None, 9, 9, 720) 0 ['block6a_bn[0][0]']
)
block6a_se_squeeze (GlobalAver (None, 720) 0 ['block6a_activation[0][0]']
agePooling2D)
block6a_se_reshape (Reshape) (None, 1, 1, 720) 0 ['block6a_se_squeeze[0][0]']
block6a_se_reduce (Conv2D) (None, 1, 1, 30) 21630 ['block6a_se_reshape[0][0]']
block6a_se_expand (Conv2D) (None, 1, 1, 720) 22320 ['block6a_se_reduce[0][0]']
block6a_se_excite (Multiply) (None, 9, 9, 720) 0 ['block6a_activation[0][0]',
'block6a_se_expand[0][0]']
block6a_project_conv (Conv2D) (None, 9, 9, 208) 149760 ['block6a_se_excite[0][0]']
block6a_project_bn (BatchNorma (None, 9, 9, 208) 832 ['block6a_project_conv[0][0]']
lization)
block6b_expand_conv (Conv2D) (None, 9, 9, 1248) 259584 ['block6a_project_bn[0][0]']
block6b_expand_bn (BatchNormal (None, 9, 9, 1248) 4992 ['block6b_expand_conv[0][0]']
ization)
block6b_expand_activation (Act (None, 9, 9, 1248) 0 ['block6b_expand_bn[0][0]']
ivation)
block6b_dwconv (DepthwiseConv2 (None, 9, 9, 1248) 31200 ['block6b_expand_activation[0][0]
D) ']
block6b_bn (BatchNormalization (None, 9, 9, 1248) 4992 ['block6b_dwconv[0][0]']
)
block6b_activation (Activation (None, 9, 9, 1248) 0 ['block6b_bn[0][0]']
)
block6b_se_squeeze (GlobalAver (None, 1248) 0 ['block6b_activation[0][0]']
agePooling2D)
block6b_se_reshape (Reshape) (None, 1, 1, 1248) 0 ['block6b_se_squeeze[0][0]']
block6b_se_reduce (Conv2D) (None, 1, 1, 52) 64948 ['block6b_se_reshape[0][0]']
block6b_se_expand (Conv2D) (None, 1, 1, 1248) 66144 ['block6b_se_reduce[0][0]']
block6b_se_excite (Multiply) (None, 9, 9, 1248) 0 ['block6b_activation[0][0]',
'block6b_se_expand[0][0]']
block6b_project_conv (Conv2D) (None, 9, 9, 208) 259584 ['block6b_se_excite[0][0]']
block6b_project_bn (BatchNorma (None, 9, 9, 208) 832 ['block6b_project_conv[0][0]']
lization)
block6b_drop (FixedDropout) (None, 9, 9, 208) 0 ['block6b_project_bn[0][0]']
block6b_add (Add) (None, 9, 9, 208) 0 ['block6b_drop[0][0]',
'block6a_project_bn[0][0]']
block6c_expand_conv (Conv2D) (None, 9, 9, 1248) 259584 ['block6b_add[0][0]']
block6c_expand_bn (BatchNormal (None, 9, 9, 1248) 4992 ['block6c_expand_conv[0][0]']
ization)
block6c_expand_activation (Act (None, 9, 9, 1248) 0 ['block6c_expand_bn[0][0]']
ivation)
block6c_dwconv (DepthwiseConv2 (None, 9, 9, 1248) 31200 ['block6c_expand_activation[0][0]
D) ']
block6c_bn (BatchNormalization (None, 9, 9, 1248) 4992 ['block6c_dwconv[0][0]']
)
block6c_activation (Activation (None, 9, 9, 1248) 0 ['block6c_bn[0][0]']
)
block6c_se_squeeze (GlobalAver (None, 1248) 0 ['block6c_activation[0][0]']
agePooling2D)
block6c_se_reshape (Reshape) (None, 1, 1, 1248) 0 ['block6c_se_squeeze[0][0]']
block6c_se_reduce (Conv2D) (None, 1, 1, 52) 64948 ['block6c_se_reshape[0][0]']
block6c_se_expand (Conv2D) (None, 1, 1, 1248) 66144 ['block6c_se_reduce[0][0]']
block6c_se_excite (Multiply) (None, 9, 9, 1248) 0 ['block6c_activation[0][0]',
'block6c_se_expand[0][0]']
block6c_project_conv (Conv2D) (None, 9, 9, 208) 259584 ['block6c_se_excite[0][0]']
block6c_project_bn (BatchNorma (None, 9, 9, 208) 832 ['block6c_project_conv[0][0]']
lization)
block6c_drop (FixedDropout) (None, 9, 9, 208) 0 ['block6c_project_bn[0][0]']
block6c_add (Add) (None, 9, 9, 208) 0 ['block6c_drop[0][0]',
'block6b_add[0][0]']
block6d_expand_conv (Conv2D) (None, 9, 9, 1248) 259584 ['block6c_add[0][0]']
block6d_expand_bn (BatchNormal (None, 9, 9, 1248) 4992 ['block6d_expand_conv[0][0]']
ization)
block6d_expand_activation (Act (None, 9, 9, 1248) 0 ['block6d_expand_bn[0][0]']
ivation)
block6d_dwconv (DepthwiseConv2 (None, 9, 9, 1248) 31200 ['block6d_expand_activation[0][0]
D) ']
block6d_bn (BatchNormalization (None, 9, 9, 1248) 4992 ['block6d_dwconv[0][0]']
)
block6d_activation (Activation (None, 9, 9, 1248) 0 ['block6d_bn[0][0]']
)
block6d_se_squeeze (GlobalAver (None, 1248) 0 ['block6d_activation[0][0]']
agePooling2D)
block6d_se_reshape (Reshape) (None, 1, 1, 1248) 0 ['block6d_se_squeeze[0][0]']
block6d_se_reduce (Conv2D) (None, 1, 1, 52) 64948 ['block6d_se_reshape[0][0]']
block6d_se_expand (Conv2D) (None, 1, 1, 1248) 66144 ['block6d_se_reduce[0][0]']
block6d_se_excite (Multiply) (None, 9, 9, 1248) 0 ['block6d_activation[0][0]',
'block6d_se_expand[0][0]']
block6d_project_conv (Conv2D) (None, 9, 9, 208) 259584 ['block6d_se_excite[0][0]']
block6d_project_bn (BatchNorma (None, 9, 9, 208) 832 ['block6d_project_conv[0][0]']
lization)
block6d_drop (FixedDropout) (None, 9, 9, 208) 0 ['block6d_project_bn[0][0]']
block6d_add (Add) (None, 9, 9, 208) 0 ['block6d_drop[0][0]',
'block6c_add[0][0]']
block6e_expand_conv (Conv2D) (None, 9, 9, 1248) 259584 ['block6d_add[0][0]']
block6e_expand_bn (BatchNormal (None, 9, 9, 1248) 4992 ['block6e_expand_conv[0][0]']
ization)
block6e_expand_activation (Act (None, 9, 9, 1248) 0 ['block6e_expand_bn[0][0]']
ivation)
block6e_dwconv (DepthwiseConv2 (None, 9, 9, 1248) 31200 ['block6e_expand_activation[0][0]
D) ']
block6e_bn (BatchNormalization (None, 9, 9, 1248) 4992 ['block6e_dwconv[0][0]']
)
block6e_activation (Activation (None, 9, 9, 1248) 0 ['block6e_bn[0][0]']
)
block6e_se_squeeze (GlobalAver (None, 1248) 0 ['block6e_activation[0][0]']
agePooling2D)
block6e_se_reshape (Reshape) (None, 1, 1, 1248) 0 ['block6e_se_squeeze[0][0]']
block6e_se_reduce (Conv2D) (None, 1, 1, 52) 64948 ['block6e_se_reshape[0][0]']
block6e_se_expand (Conv2D) (None, 1, 1, 1248) 66144 ['block6e_se_reduce[0][0]']
block6e_se_excite (Multiply) (None, 9, 9, 1248) 0 ['block6e_activation[0][0]',
'block6e_se_expand[0][0]']
block6e_project_conv (Conv2D) (None, 9, 9, 208) 259584 ['block6e_se_excite[0][0]']
block6e_project_bn (BatchNorma (None, 9, 9, 208) 832 ['block6e_project_conv[0][0]']
lization)
block6e_drop (FixedDropout) (None, 9, 9, 208) 0 ['block6e_project_bn[0][0]']
block6e_add (Add) (None, 9, 9, 208) 0 ['block6e_drop[0][0]',
'block6d_add[0][0]']
block7a_expand_conv (Conv2D) (None, 9, 9, 1248) 259584 ['block6e_add[0][0]']
block7a_expand_bn (BatchNormal (None, 9, 9, 1248) 4992 ['block7a_expand_conv[0][0]']
ization)
block7a_expand_activation (Act (None, 9, 9, 1248) 0 ['block7a_expand_bn[0][0]']
ivation)
block7a_dwconv (DepthwiseConv2 (None, 9, 9, 1248) 11232 ['block7a_expand_activation[0][0]
D) ']
block7a_bn (BatchNormalization (None, 9, 9, 1248) 4992 ['block7a_dwconv[0][0]']
)
block7a_activation (Activation (None, 9, 9, 1248) 0 ['block7a_bn[0][0]']
)
block7a_se_squeeze (GlobalAver (None, 1248) 0 ['block7a_activation[0][0]']
agePooling2D)
block7a_se_reshape (Reshape) (None, 1, 1, 1248) 0 ['block7a_se_squeeze[0][0]']
block7a_se_reduce (Conv2D) (None, 1, 1, 52) 64948 ['block7a_se_reshape[0][0]']
block7a_se_expand (Conv2D) (None, 1, 1, 1248) 66144 ['block7a_se_reduce[0][0]']
block7a_se_excite (Multiply) (None, 9, 9, 1248) 0 ['block7a_activation[0][0]',
'block7a_se_expand[0][0]']
block7a_project_conv (Conv2D) (None, 9, 9, 352) 439296 ['block7a_se_excite[0][0]']
block7a_project_bn (BatchNorma (None, 9, 9, 352) 1408 ['block7a_project_conv[0][0]']
lization)
block7b_expand_conv (Conv2D) (None, 9, 9, 2112) 743424 ['block7a_project_bn[0][0]']
block7b_expand_bn (BatchNormal (None, 9, 9, 2112) 8448 ['block7b_expand_conv[0][0]']
ization)
block7b_expand_activation (Act (None, 9, 9, 2112) 0 ['block7b_expand_bn[0][0]']
ivation)
block7b_dwconv (DepthwiseConv2 (None, 9, 9, 2112) 19008 ['block7b_expand_activation[0][0]
D) ']
block7b_bn (BatchNormalization (None, 9, 9, 2112) 8448 ['block7b_dwconv[0][0]']
)
block7b_activation (Activation (None, 9, 9, 2112) 0 ['block7b_bn[0][0]']
)
block7b_se_squeeze (GlobalAver (None, 2112) 0 ['block7b_activation[0][0]']
agePooling2D)
block7b_se_reshape (Reshape) (None, 1, 1, 2112) 0 ['block7b_se_squeeze[0][0]']
block7b_se_reduce (Conv2D) (None, 1, 1, 88) 185944 ['block7b_se_reshape[0][0]']
block7b_se_expand (Conv2D) (None, 1, 1, 2112) 187968 ['block7b_se_reduce[0][0]']
block7b_se_excite (Multiply) (None, 9, 9, 2112) 0 ['block7b_activation[0][0]',
'block7b_se_expand[0][0]']
block7b_project_conv (Conv2D) (None, 9, 9, 352) 743424 ['block7b_se_excite[0][0]']
block7b_project_bn (BatchNorma (None, 9, 9, 352) 1408 ['block7b_project_conv[0][0]']
lization)
block7b_drop (FixedDropout) (None, 9, 9, 352) 0 ['block7b_project_bn[0][0]']
block7b_add (Add) (None, 9, 9, 352) 0 ['block7b_drop[0][0]',
'block7a_project_bn[0][0]']
top_conv (Conv2D) (None, 9, 9, 1408) 495616 ['block7b_add[0][0]']
top_bn (BatchNormalization) (None, 9, 9, 1408) 5632 ['top_conv[0][0]']
top_activation (Activation) (None, 9, 9, 1408) 0 ['top_bn[0][0]']
flatten_2 (Flatten) (None, 114048) 0 ['top_activation[0][0]']
dense_4 (Dense) (None, 1024) 116786176 ['flatten_2[0][0]']
dropout_2 (Dropout) (None, 1024) 0 ['dense_4[0][0]']
dense_5 (Dense) (None, 1) 1025 ['dropout_2[0][0]']
==================================================================================================
Total params: 124,555,763
Trainable params: 116,787,201
Non-trainable params: 7,768,562
__________________________________________________________________________________________________
#get total parameters
model_params = model_final.count_params()
# Specify the optimizer, loss function and evaluation metrics.
model_final.compile(loss='binary_crossentropy', optimizer=tf.keras.optimizers.RMSprop(learning_rate=0.0001, weight_decay=1e-6), metrics=['accuracy'])
t1 = time.time()
#train the model
eff_history = model_final.fit_generator(train_generator, validation_data = validation_generator, steps_per_epoch = 100, epochs = 10)
fit_time = time.time() - t1
<ipython-input-66-b7f31b017b18>:3: UserWarning: `Model.fit_generator` is deprecated and will be removed in a future version. Please use `Model.fit`, which supports generators. eff_history = model_final.fit_generator(train_generator, validation_data = validation_generator, steps_per_epoch = 100, epochs = 10)
Epoch 1/10  100/100 [==============================] - 51s 419ms/step - loss: 0.4875 - accuracy: 0.9150 - val_loss: 0.0565 - val_accuracy: 0.9862
Epoch 2/10  100/100 [==============================] - 37s 366ms/step - loss: 0.2917 - accuracy: 0.9500 - val_loss: 0.0733 - val_accuracy: 0.9850
Epoch 3/10  100/100 [==============================] - 37s 365ms/step - loss: 0.2712 - accuracy: 0.9570 - val_loss: 0.0248 - val_accuracy: 0.9862
Epoch 4/10  100/100 [==============================] - 37s 366ms/step - loss: 0.2456 - accuracy: 0.9500 - val_loss: 0.0257 - val_accuracy: 0.9937
Epoch 5/10  100/100 [==============================] - 37s 365ms/step - loss: 0.2565 - accuracy: 0.9580 - val_loss: 0.0160 - val_accuracy: 0.9950
Epoch 6/10  100/100 [==============================] - 37s 366ms/step - loss: 0.2023 - accuracy: 0.9552 - val_loss: 0.0272 - val_accuracy: 0.9900
Epoch 7/10  100/100 [==============================] - 37s 367ms/step - loss: 0.1980 - accuracy: 0.9585 - val_loss: 0.0496 - val_accuracy: 0.9887
Epoch 8/10  100/100 [==============================] - 36s 363ms/step - loss: 0.1797 - accuracy: 0.9652 - val_loss: 0.0244 - val_accuracy: 0.9912
Epoch 9/10  100/100 [==============================] - 36s 362ms/step - loss: 0.2189 - accuracy: 0.9607 - val_loss: 0.0274 - val_accuracy: 0.9912
Epoch 10/10 100/100 [==============================] - 36s 363ms/step - loss: 0.2232 - accuracy: 0.9565 - val_loss: 0.0339 - val_accuracy: 0.9937
# time it took to fit the model
print(fit_time)
380.2059442996979
#Plot training and validation accuracy and loss for each epoch
acc = eff_history.history['accuracy']
val_acc = eff_history.history['val_accuracy']
loss = eff_history.history['loss']
val_loss = eff_history.history['val_loss']
epochs = range(1,len(acc) + 1)
plt.plot(epochs,acc,label = 'Training Accuracy')
plt.plot(epochs,val_acc,label = 'Validation Accuracy')
plt.title('Training and Validation Accuracy')
plt.legend()
plt.figure()
plt.plot(epochs,loss,label = 'Training loss')
plt.plot(epochs,val_loss,label = 'Validation Loss')
plt.title('Training and Validation Loss')
plt.legend()
plt.show()
# Test dataset
test_datagen = ImageDataGenerator(rescale=1./255)
test_generator = test_datagen.flow_from_directory(
test_dir,
target_size=(260, 260),
shuffle = False,
class_mode='binary',
batch_size=1)
Found 2023 images belonging to 2 classes.
#Get test length
filenames = test_generator.filenames
nb_samples = len(filenames)
#Predict on test set
predict = model_final.predict_generator(test_generator,steps = nb_samples)
<ipython-input-71-7710eff794cf>:2: UserWarning: `Model.predict_generator` is deprecated and will be removed in a future version. Please use `Model.predict`, which supports generators. predict = model_final.predict_generator(test_generator,steps = nb_samples)
#Get list of prediction results
pred_list = []
for i in predict:
if i > 0.5:
result = 1 #dog
pred_list.append(result)
else:
result = 0 #cat
pred_list.append(result)
#Create dataframe of image ID, image true label, image predicted label
import pandas as pd
image_ids = [name.split('/')[-1] for name in test_generator.filenames]
image_label = [name.split('/')[0] for name in test_generator.filenames]
data = {'id': image_ids, 'label':image_label, 'prediction':pred_list}
data_df = pd.DataFrame(data)
data_df.label.replace(('cats', 'dogs'), (0, 1), inplace=True) # change cat and dog label to 0 or 1
#Get test accuracy score
from sklearn.metrics import accuracy_score, confusion_matrix
test_accuracy = accuracy_score(data_df['label'], data_df['prediction'])
print('Test Accuracy: ', round((test_accuracy * 100), 2), "%")
Test Accuracy: 98.32 %
from sklearn.metrics import classification_report
#Classification Report
print(classification_report(data_df['label'], data_df['prediction']))
precision recall f1-score support
0 0.99 0.97 0.98 1011
1 0.97 0.99 0.98 1012
accuracy 0.98 2023
macro avg 0.98 0.98 0.98 2023
weighted avg 0.98 0.98 0.98 2023
#Create confusion matrix
import seaborn as sns
label = [0, 1] #0 = cat and 1 = dog
cm = confusion_matrix(data_df['label'], data_df['prediction'], labels = label)
#Plot
ax= plt.subplot()
sns.heatmap(cm, annot=True, fmt='g', ax=ax);
# labels, title and ticks
ax.set_xlabel('Predicted labels');ax.set_ylabel('True labels');
ax.set_title('Confusion Matrix');
ax.xaxis.set_ticklabels(["Cat", "Dog"]); ax.yaxis.set_ticklabels(["Cat", "Dog"])
ExperimentLog.loc[len(ExperimentLog)] = [
"EfficientNet B2",
260,
"RMSprop",
10,
max(acc),
max(val_acc),
test_accuracy,
fit_time,
model_params
]
ExperimentLog
| | Base Model | Input Resolution | Optimizer | Epochs | Training Accuracy | Validation Accuracy | Test Accuracy | Fit Time (s) | Total Parameters |
|---|---|---|---|---|---|---|---|---|---|
| 0 | EfficientNet B0 | 224 | RMSprop | 10 | 0.951500 | 0.98750 | 0.971330 | 2195.619411 | 68276893 |
| 1 | EfficientNet B0 with decay | 224 | RMSprop | 10 | 0.956171 | 0.98750 | 0.974790 | 381.655278 | 68276893 |
| 2 | EfficientNet B1 | 240 | RMSprop | 10 | 0.959698 | 0.99125 | 0.985171 | 349.486195 | 90463361 |
| 3 | EfficientNet B2 | 260 | RMSprop | 10 | 0.965239 | 0.99500 | 0.983193 | 380.205944 | 124555763 |
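With four configurations logged, a quick bar chart makes the accuracy comparison easier to read than the raw table. A small sketch, assuming the ExperimentLog column names shown above:
fig, ax = plt.subplots(figsize = (8, 4))
ax.bar(ExperimentLog['Base Model'], ExperimentLog['Test Accuracy'])
ax.set_ylim(0.95, 1.0)  # zoom in on the range where the models actually differ
ax.set_ylabel('Test Accuracy')
ax.set_title('Test accuracy by model variant')
plt.xticks(rotation = 20, ha = 'right')
plt.tight_layout()
plt.show()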
# Add rescaling and augmentation to ImageDataGenerator for the training set
train_datagen = ImageDataGenerator(rescale = 1./255., rotation_range = 40, width_shift_range = 0.2, height_shift_range = 0.2, shear_range = 0.2, zoom_range = 0.2, horizontal_flip = True, validation_split=0.1) # set validation split
# Rescale validation set. No augmentation on the validation set.
validation_datagen = ImageDataGenerator(rescale = 1./255.,validation_split=0.1) # set validation split
#Read images directly from directory.
train_generator = train_datagen.flow_from_directory(train_dir, seed = 42, shuffle = True, batch_size = 20, class_mode = 'binary', target_size = (300, 300), subset='training') #set as training data
validation_generator = validation_datagen.flow_from_directory(train_dir, seed = 42, shuffle = True, batch_size = 20, class_mode = 'binary', target_size = (300, 300), subset='validation') # same directory as training data. Set as validation data
Found 7205 images belonging to 2 classes.
Found 800 images belonging to 2 classes.
#Instantiates the EfficientNet architecture
base_model = efn.EfficientNetB3(input_shape = (300, 300, 3), include_top = False, weights = 'imagenet')
Downloading data from https://github.com/Callidior/keras-applications/releases/download/efficientnet/efficientnet-b3_weights_tf_dim_ordering_tf_kernels_autoaugment_notop.h5 44107200/44107200 [==============================] - 3s 0us/step
# Set trainable attribute to false for all of the base model layers
for layer in base_model.layers:
layer.trainable = False
#Build on top of existing base model.
x = base_model.output
x = layers.Flatten()(x) #convert to 1D array
x = layers.Dense(1024, activation="relu")(x) #fully connected layer with 1,024 hidden units and ReLU activation
x = layers.Dropout(0.5)(x) #Drops 50% of inputs to zero at each training iteration (prevents overfitting)
# Add a final sigmoid layer with 1 node for classification output (probability between 0 and 1)
predictions = layers.Dense(1, activation="sigmoid")(x)
model_final = Model(inputs = base_model.input, outputs = predictions)
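Since every variant so far is trained for a fixed 10 epochs, Keras callbacks such as EarlyStopping and ModelCheckpoint could optionally be passed to fit for this B3 model to stop on a validation-loss plateau and keep the best weights. A hedged sketch (the checkpoint filename is illustrative, not from the original run):
from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint
callbacks = [
    EarlyStopping(monitor = 'val_loss', patience = 3, restore_best_weights = True),       # stop after 3 stagnant epochs
    ModelCheckpoint('effnet_b3_best.h5', monitor = 'val_loss', save_best_only = True)     # keep the best weights (hypothetical path)
]
# These would be passed as callbacks = callbacks to model_final.fit(...)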
#Print model summary using Keras' built-in summary() method (no extra import needed)
model_sum = model_final.summary()
Model: "model_3"
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_4 (InputLayer) [(None, 300, 300, 3 0 []
)]
stem_conv (Conv2D) (None, 150, 150, 40 1080 ['input_4[0][0]']
)
stem_bn (BatchNormalization) (None, 150, 150, 40 160 ['stem_conv[0][0]']
)
stem_activation (Activation) (None, 150, 150, 40 0 ['stem_bn[0][0]']
)
block1a_dwconv (DepthwiseConv2 (None, 150, 150, 40 360 ['stem_activation[0][0]']
D) )
block1a_bn (BatchNormalization (None, 150, 150, 40 160 ['block1a_dwconv[0][0]']
) )
block1a_activation (Activation (None, 150, 150, 40 0 ['block1a_bn[0][0]']
) )
block1a_se_squeeze (GlobalAver (None, 40) 0 ['block1a_activation[0][0]']
agePooling2D)
block1a_se_reshape (Reshape) (None, 1, 1, 40) 0 ['block1a_se_squeeze[0][0]']
block1a_se_reduce (Conv2D) (None, 1, 1, 10) 410 ['block1a_se_reshape[0][0]']
block1a_se_expand (Conv2D) (None, 1, 1, 40) 440 ['block1a_se_reduce[0][0]']
block1a_se_excite (Multiply) (None, 150, 150, 40 0 ['block1a_activation[0][0]',
) 'block1a_se_expand[0][0]']
block1a_project_conv (Conv2D) (None, 150, 150, 24 960 ['block1a_se_excite[0][0]']
)
block1a_project_bn (BatchNorma (None, 150, 150, 24 96 ['block1a_project_conv[0][0]']
lization) )
block1b_dwconv (DepthwiseConv2 (None, 150, 150, 24 216 ['block1a_project_bn[0][0]']
D) )
block1b_bn (BatchNormalization (None, 150, 150, 24 96 ['block1b_dwconv[0][0]']
) )
block1b_activation (Activation (None, 150, 150, 24 0 ['block1b_bn[0][0]']
) )
block1b_se_squeeze (GlobalAver (None, 24) 0 ['block1b_activation[0][0]']
agePooling2D)
block1b_se_reshape (Reshape) (None, 1, 1, 24) 0 ['block1b_se_squeeze[0][0]']
block1b_se_reduce (Conv2D) (None, 1, 1, 6) 150 ['block1b_se_reshape[0][0]']
block1b_se_expand (Conv2D) (None, 1, 1, 24) 168 ['block1b_se_reduce[0][0]']
block1b_se_excite (Multiply) (None, 150, 150, 24 0 ['block1b_activation[0][0]',
) 'block1b_se_expand[0][0]']
block1b_project_conv (Conv2D) (None, 150, 150, 24 576 ['block1b_se_excite[0][0]']
)
block1b_project_bn (BatchNorma (None, 150, 150, 24 96 ['block1b_project_conv[0][0]']
lization) )
block1b_drop (FixedDropout) (None, 150, 150, 24 0 ['block1b_project_bn[0][0]']
)
block1b_add (Add) (None, 150, 150, 24 0 ['block1b_drop[0][0]',
) 'block1a_project_bn[0][0]']
block2a_expand_conv (Conv2D) (None, 150, 150, 14 3456 ['block1b_add[0][0]']
4)
block2a_expand_bn (BatchNormal (None, 150, 150, 14 576 ['block2a_expand_conv[0][0]']
ization) 4)
block2a_expand_activation (Act (None, 150, 150, 14 0 ['block2a_expand_bn[0][0]']
ivation) 4)
block2a_dwconv (DepthwiseConv2 (None, 75, 75, 144) 1296 ['block2a_expand_activation[0][0]
D) ']
block2a_bn (BatchNormalization (None, 75, 75, 144) 576 ['block2a_dwconv[0][0]']
)
block2a_activation (Activation (None, 75, 75, 144) 0 ['block2a_bn[0][0]']
)
block2a_se_squeeze (GlobalAver (None, 144) 0 ['block2a_activation[0][0]']
agePooling2D)
block2a_se_reshape (Reshape) (None, 1, 1, 144) 0 ['block2a_se_squeeze[0][0]']
block2a_se_reduce (Conv2D) (None, 1, 1, 6) 870 ['block2a_se_reshape[0][0]']
block2a_se_expand (Conv2D) (None, 1, 1, 144) 1008 ['block2a_se_reduce[0][0]']
block2a_se_excite (Multiply) (None, 75, 75, 144) 0 ['block2a_activation[0][0]',
'block2a_se_expand[0][0]']
block2a_project_conv (Conv2D) (None, 75, 75, 32) 4608 ['block2a_se_excite[0][0]']
block2a_project_bn (BatchNorma (None, 75, 75, 32) 128 ['block2a_project_conv[0][0]']
lization)
block2b_expand_conv (Conv2D) (None, 75, 75, 192) 6144 ['block2a_project_bn[0][0]']
block2b_expand_bn (BatchNormal (None, 75, 75, 192) 768 ['block2b_expand_conv[0][0]']
ization)
block2b_expand_activation (Act (None, 75, 75, 192) 0 ['block2b_expand_bn[0][0]']
ivation)
block2b_dwconv (DepthwiseConv2 (None, 75, 75, 192) 1728 ['block2b_expand_activation[0][0]
D) ']
block2b_bn (BatchNormalization (None, 75, 75, 192) 768 ['block2b_dwconv[0][0]']
)
block2b_activation (Activation (None, 75, 75, 192) 0 ['block2b_bn[0][0]']
)
block2b_se_squeeze (GlobalAver (None, 192) 0 ['block2b_activation[0][0]']
agePooling2D)
block2b_se_reshape (Reshape) (None, 1, 1, 192) 0 ['block2b_se_squeeze[0][0]']
block2b_se_reduce (Conv2D) (None, 1, 1, 8) 1544 ['block2b_se_reshape[0][0]']
block2b_se_expand (Conv2D) (None, 1, 1, 192) 1728 ['block2b_se_reduce[0][0]']
block2b_se_excite (Multiply) (None, 75, 75, 192) 0 ['block2b_activation[0][0]',
'block2b_se_expand[0][0]']
block2b_project_conv (Conv2D) (None, 75, 75, 32) 6144 ['block2b_se_excite[0][0]']
block2b_project_bn (BatchNorma (None, 75, 75, 32) 128 ['block2b_project_conv[0][0]']
lization)
block2b_drop (FixedDropout) (None, 75, 75, 32) 0 ['block2b_project_bn[0][0]']
block2b_add (Add) (None, 75, 75, 32) 0 ['block2b_drop[0][0]',
'block2a_project_bn[0][0]']
block2c_expand_conv (Conv2D) (None, 75, 75, 192) 6144 ['block2b_add[0][0]']
block2c_expand_bn (BatchNormal (None, 75, 75, 192) 768 ['block2c_expand_conv[0][0]']
ization)
block2c_expand_activation (Act (None, 75, 75, 192) 0 ['block2c_expand_bn[0][0]']
ivation)
block2c_dwconv (DepthwiseConv2 (None, 75, 75, 192) 1728 ['block2c_expand_activation[0][0]
D) ']
block2c_bn (BatchNormalization (None, 75, 75, 192) 768 ['block2c_dwconv[0][0]']
)
block2c_activation (Activation (None, 75, 75, 192) 0 ['block2c_bn[0][0]']
)
block2c_se_squeeze (GlobalAver (None, 192) 0 ['block2c_activation[0][0]']
agePooling2D)
block2c_se_reshape (Reshape) (None, 1, 1, 192) 0 ['block2c_se_squeeze[0][0]']
block2c_se_reduce (Conv2D) (None, 1, 1, 8) 1544 ['block2c_se_reshape[0][0]']
block2c_se_expand (Conv2D) (None, 1, 1, 192) 1728 ['block2c_se_reduce[0][0]']
block2c_se_excite (Multiply) (None, 75, 75, 192) 0 ['block2c_activation[0][0]',
'block2c_se_expand[0][0]']
block2c_project_conv (Conv2D) (None, 75, 75, 32) 6144 ['block2c_se_excite[0][0]']
block2c_project_bn (BatchNorma (None, 75, 75, 32) 128 ['block2c_project_conv[0][0]']
lization)
block2c_drop (FixedDropout) (None, 75, 75, 32) 0 ['block2c_project_bn[0][0]']
block2c_add (Add) (None, 75, 75, 32) 0 ['block2c_drop[0][0]',
'block2b_add[0][0]']
block3a_expand_conv (Conv2D) (None, 75, 75, 192) 6144 ['block2c_add[0][0]']
block3a_expand_bn (BatchNormal (None, 75, 75, 192) 768 ['block3a_expand_conv[0][0]']
ization)
block3a_expand_activation (Act (None, 75, 75, 192) 0 ['block3a_expand_bn[0][0]']
ivation)
block3a_dwconv (DepthwiseConv2 (None, 38, 38, 192) 4800 ['block3a_expand_activation[0][0]
D) ']
block3a_bn (BatchNormalization (None, 38, 38, 192) 768 ['block3a_dwconv[0][0]']
)
block3a_activation (Activation (None, 38, 38, 192) 0 ['block3a_bn[0][0]']
)
block3a_se_squeeze (GlobalAver (None, 192) 0 ['block3a_activation[0][0]']
agePooling2D)
block3a_se_reshape (Reshape) (None, 1, 1, 192) 0 ['block3a_se_squeeze[0][0]']
block3a_se_reduce (Conv2D) (None, 1, 1, 8) 1544 ['block3a_se_reshape[0][0]']
block3a_se_expand (Conv2D) (None, 1, 1, 192) 1728 ['block3a_se_reduce[0][0]']
block3a_se_excite (Multiply) (None, 38, 38, 192) 0 ['block3a_activation[0][0]',
'block3a_se_expand[0][0]']
block3a_project_conv (Conv2D) (None, 38, 38, 48) 9216 ['block3a_se_excite[0][0]']
block3a_project_bn (BatchNorma (None, 38, 38, 48) 192 ['block3a_project_conv[0][0]']
lization)
block3b_expand_conv (Conv2D) (None, 38, 38, 288) 13824 ['block3a_project_bn[0][0]']
block3b_expand_bn (BatchNormal (None, 38, 38, 288) 1152 ['block3b_expand_conv[0][0]']
ization)
block3b_expand_activation (Act (None, 38, 38, 288) 0 ['block3b_expand_bn[0][0]']
ivation)
block3b_dwconv (DepthwiseConv2 (None, 38, 38, 288) 7200 ['block3b_expand_activation[0][0]
D) ']
block3b_bn (BatchNormalization (None, 38, 38, 288) 1152 ['block3b_dwconv[0][0]']
)
block3b_activation (Activation (None, 38, 38, 288) 0 ['block3b_bn[0][0]']
)
block3b_se_squeeze (GlobalAver (None, 288) 0 ['block3b_activation[0][0]']
agePooling2D)
block3b_se_reshape (Reshape) (None, 1, 1, 288) 0 ['block3b_se_squeeze[0][0]']
block3b_se_reduce (Conv2D) (None, 1, 1, 12) 3468 ['block3b_se_reshape[0][0]']
block3b_se_expand (Conv2D) (None, 1, 1, 288) 3744 ['block3b_se_reduce[0][0]']
block3b_se_excite (Multiply) (None, 38, 38, 288) 0 ['block3b_activation[0][0]',
'block3b_se_expand[0][0]']
block3b_project_conv (Conv2D) (None, 38, 38, 48) 13824 ['block3b_se_excite[0][0]']
block3b_project_bn (BatchNorma (None, 38, 38, 48) 192 ['block3b_project_conv[0][0]']
lization)
block3b_drop (FixedDropout) (None, 38, 38, 48) 0 ['block3b_project_bn[0][0]']
block3b_add (Add) (None, 38, 38, 48) 0 ['block3b_drop[0][0]',
'block3a_project_bn[0][0]']
block3c_expand_conv (Conv2D) (None, 38, 38, 288) 13824 ['block3b_add[0][0]']
block3c_expand_bn (BatchNormal (None, 38, 38, 288) 1152 ['block3c_expand_conv[0][0]']
ization)
block3c_expand_activation (Act (None, 38, 38, 288) 0 ['block3c_expand_bn[0][0]']
ivation)
block3c_dwconv (DepthwiseConv2 (None, 38, 38, 288) 7200 ['block3c_expand_activation[0][0]
D) ']
block3c_bn (BatchNormalization (None, 38, 38, 288) 1152 ['block3c_dwconv[0][0]']
)
block3c_activation (Activation (None, 38, 38, 288) 0 ['block3c_bn[0][0]']
)
block3c_se_squeeze (GlobalAver (None, 288) 0 ['block3c_activation[0][0]']
agePooling2D)
block3c_se_reshape (Reshape) (None, 1, 1, 288) 0 ['block3c_se_squeeze[0][0]']
block3c_se_reduce (Conv2D) (None, 1, 1, 12) 3468 ['block3c_se_reshape[0][0]']
block3c_se_expand (Conv2D) (None, 1, 1, 288) 3744 ['block3c_se_reduce[0][0]']
block3c_se_excite (Multiply) (None, 38, 38, 288) 0 ['block3c_activation[0][0]',
'block3c_se_expand[0][0]']
block3c_project_conv (Conv2D) (None, 38, 38, 48) 13824 ['block3c_se_excite[0][0]']
block3c_project_bn (BatchNorma (None, 38, 38, 48) 192 ['block3c_project_conv[0][0]']
lization)
block3c_drop (FixedDropout) (None, 38, 38, 48) 0 ['block3c_project_bn[0][0]']
block3c_add (Add) (None, 38, 38, 48) 0 ['block3c_drop[0][0]',
'block3b_add[0][0]']
block4a_expand_conv (Conv2D) (None, 38, 38, 288) 13824 ['block3c_add[0][0]']
block4a_expand_bn (BatchNormal (None, 38, 38, 288) 1152 ['block4a_expand_conv[0][0]']
ization)
block4a_expand_activation (Act (None, 38, 38, 288) 0 ['block4a_expand_bn[0][0]']
ivation)
block4a_dwconv (DepthwiseConv2 (None, 19, 19, 288) 2592 ['block4a_expand_activation[0][0]
D) ']
block4a_bn (BatchNormalization (None, 19, 19, 288) 1152 ['block4a_dwconv[0][0]']
)
block4a_activation (Activation (None, 19, 19, 288) 0 ['block4a_bn[0][0]']
)
block4a_se_squeeze (GlobalAver (None, 288) 0 ['block4a_activation[0][0]']
agePooling2D)
block4a_se_reshape (Reshape) (None, 1, 1, 288) 0 ['block4a_se_squeeze[0][0]']
block4a_se_reduce (Conv2D) (None, 1, 1, 12) 3468 ['block4a_se_reshape[0][0]']
block4a_se_expand (Conv2D) (None, 1, 1, 288) 3744 ['block4a_se_reduce[0][0]']
block4a_se_excite (Multiply) (None, 19, 19, 288) 0 ['block4a_activation[0][0]',
'block4a_se_expand[0][0]']
block4a_project_conv (Conv2D) (None, 19, 19, 96) 27648 ['block4a_se_excite[0][0]']
block4a_project_bn (BatchNorma (None, 19, 19, 96) 384 ['block4a_project_conv[0][0]']
lization)
block4b_expand_conv (Conv2D) (None, 19, 19, 576) 55296 ['block4a_project_bn[0][0]']
block4b_expand_bn (BatchNormal (None, 19, 19, 576) 2304 ['block4b_expand_conv[0][0]']
ization)
block4b_expand_activation (Act (None, 19, 19, 576) 0 ['block4b_expand_bn[0][0]']
ivation)
block4b_dwconv (DepthwiseConv2 (None, 19, 19, 576) 5184 ['block4b_expand_activation[0][0]
D) ']
block4b_bn (BatchNormalization (None, 19, 19, 576) 2304 ['block4b_dwconv[0][0]']
)
block4b_activation (Activation (None, 19, 19, 576) 0 ['block4b_bn[0][0]']
)
block4b_se_squeeze (GlobalAver (None, 576) 0 ['block4b_activation[0][0]']
agePooling2D)
block4b_se_reshape (Reshape) (None, 1, 1, 576) 0 ['block4b_se_squeeze[0][0]']
block4b_se_reduce (Conv2D) (None, 1, 1, 24) 13848 ['block4b_se_reshape[0][0]']
block4b_se_expand (Conv2D) (None, 1, 1, 576) 14400 ['block4b_se_reduce[0][0]']
block4b_se_excite (Multiply) (None, 19, 19, 576) 0 ['block4b_activation[0][0]',
'block4b_se_expand[0][0]']
block4b_project_conv (Conv2D) (None, 19, 19, 96) 55296 ['block4b_se_excite[0][0]']
block4b_project_bn (BatchNorma (None, 19, 19, 96) 384 ['block4b_project_conv[0][0]']
lization)
block4b_drop (FixedDropout) (None, 19, 19, 96) 0 ['block4b_project_bn[0][0]']
block4b_add (Add) (None, 19, 19, 96) 0 ['block4b_drop[0][0]',
'block4a_project_bn[0][0]']
block4c_expand_conv (Conv2D) (None, 19, 19, 576) 55296 ['block4b_add[0][0]']
block4c_expand_bn (BatchNormal (None, 19, 19, 576) 2304 ['block4c_expand_conv[0][0]']
ization)
block4c_expand_activation (Act (None, 19, 19, 576) 0 ['block4c_expand_bn[0][0]']
ivation)
block4c_dwconv (DepthwiseConv2 (None, 19, 19, 576) 5184 ['block4c_expand_activation[0][0]
D) ']
block4c_bn (BatchNormalization (None, 19, 19, 576) 2304 ['block4c_dwconv[0][0]']
)
block4c_activation (Activation (None, 19, 19, 576) 0 ['block4c_bn[0][0]']
)
block4c_se_squeeze (GlobalAver (None, 576) 0 ['block4c_activation[0][0]']
agePooling2D)
block4c_se_reshape (Reshape) (None, 1, 1, 576) 0 ['block4c_se_squeeze[0][0]']
block4c_se_reduce (Conv2D) (None, 1, 1, 24) 13848 ['block4c_se_reshape[0][0]']
block4c_se_expand (Conv2D) (None, 1, 1, 576) 14400 ['block4c_se_reduce[0][0]']
block4c_se_excite (Multiply) (None, 19, 19, 576) 0 ['block4c_activation[0][0]',
'block4c_se_expand[0][0]']
block4c_project_conv (Conv2D) (None, 19, 19, 96) 55296 ['block4c_se_excite[0][0]']
block4c_project_bn (BatchNorma (None, 19, 19, 96) 384 ['block4c_project_conv[0][0]']
lization)
block4c_drop (FixedDropout) (None, 19, 19, 96) 0 ['block4c_project_bn[0][0]']
block4c_add (Add) (None, 19, 19, 96) 0 ['block4c_drop[0][0]',
'block4b_add[0][0]']
block4d_expand_conv (Conv2D) (None, 19, 19, 576) 55296 ['block4c_add[0][0]']
block4d_expand_bn (BatchNormal (None, 19, 19, 576) 2304 ['block4d_expand_conv[0][0]']
ization)
block4d_expand_activation (Act (None, 19, 19, 576) 0 ['block4d_expand_bn[0][0]']
ivation)
block4d_dwconv (DepthwiseConv2 (None, 19, 19, 576) 5184 ['block4d_expand_activation[0][0]
D) ']
block4d_bn (BatchNormalization (None, 19, 19, 576) 2304 ['block4d_dwconv[0][0]']
)
block4d_activation (Activation (None, 19, 19, 576) 0 ['block4d_bn[0][0]']
)
block4d_se_squeeze (GlobalAver (None, 576) 0 ['block4d_activation[0][0]']
agePooling2D)
block4d_se_reshape (Reshape) (None, 1, 1, 576) 0 ['block4d_se_squeeze[0][0]']
block4d_se_reduce (Conv2D) (None, 1, 1, 24) 13848 ['block4d_se_reshape[0][0]']
block4d_se_expand (Conv2D) (None, 1, 1, 576) 14400 ['block4d_se_reduce[0][0]']
block4d_se_excite (Multiply) (None, 19, 19, 576) 0 ['block4d_activation[0][0]',
'block4d_se_expand[0][0]']
block4d_project_conv (Conv2D) (None, 19, 19, 96) 55296 ['block4d_se_excite[0][0]']
block4d_project_bn (BatchNorma (None, 19, 19, 96) 384 ['block4d_project_conv[0][0]']
lization)
block4d_drop (FixedDropout) (None, 19, 19, 96) 0 ['block4d_project_bn[0][0]']
block4d_add (Add) (None, 19, 19, 96) 0 ['block4d_drop[0][0]',
'block4c_add[0][0]']
block4e_expand_conv (Conv2D) (None, 19, 19, 576) 55296 ['block4d_add[0][0]']
block4e_expand_bn (BatchNormal (None, 19, 19, 576) 2304 ['block4e_expand_conv[0][0]']
ization)
block4e_expand_activation (Act (None, 19, 19, 576) 0 ['block4e_expand_bn[0][0]']
ivation)
block4e_dwconv (DepthwiseConv2 (None, 19, 19, 576) 5184 ['block4e_expand_activation[0][0]
D) ']
block4e_bn (BatchNormalization (None, 19, 19, 576) 2304 ['block4e_dwconv[0][0]']
)
block4e_activation (Activation (None, 19, 19, 576) 0 ['block4e_bn[0][0]']
)
block4e_se_squeeze (GlobalAver (None, 576) 0 ['block4e_activation[0][0]']
agePooling2D)
block4e_se_reshape (Reshape) (None, 1, 1, 576) 0 ['block4e_se_squeeze[0][0]']
block4e_se_reduce (Conv2D) (None, 1, 1, 24) 13848 ['block4e_se_reshape[0][0]']
block4e_se_expand (Conv2D) (None, 1, 1, 576) 14400 ['block4e_se_reduce[0][0]']
block4e_se_excite (Multiply) (None, 19, 19, 576) 0 ['block4e_activation[0][0]',
'block4e_se_expand[0][0]']
block4e_project_conv (Conv2D) (None, 19, 19, 96) 55296 ['block4e_se_excite[0][0]']
block4e_project_bn (BatchNorma (None, 19, 19, 96) 384 ['block4e_project_conv[0][0]']
lization)
block4e_drop (FixedDropout) (None, 19, 19, 96) 0 ['block4e_project_bn[0][0]']
block4e_add (Add) (None, 19, 19, 96) 0 ['block4e_drop[0][0]',
'block4d_add[0][0]']
block5a_expand_conv (Conv2D) (None, 19, 19, 576) 55296 ['block4e_add[0][0]']
block5a_expand_bn (BatchNormal (None, 19, 19, 576) 2304 ['block5a_expand_conv[0][0]']
ization)
block5a_expand_activation (Act (None, 19, 19, 576) 0 ['block5a_expand_bn[0][0]']
ivation)
block5a_dwconv (DepthwiseConv2 (None, 19, 19, 576) 14400 ['block5a_expand_activation[0][0]
D) ']
block5a_bn (BatchNormalization (None, 19, 19, 576) 2304 ['block5a_dwconv[0][0]']
)
block5a_activation (Activation (None, 19, 19, 576) 0 ['block5a_bn[0][0]']
)
block5a_se_squeeze (GlobalAver (None, 576) 0 ['block5a_activation[0][0]']
agePooling2D)
block5a_se_reshape (Reshape) (None, 1, 1, 576) 0 ['block5a_se_squeeze[0][0]']
block5a_se_reduce (Conv2D) (None, 1, 1, 24) 13848 ['block5a_se_reshape[0][0]']
block5a_se_expand (Conv2D) (None, 1, 1, 576) 14400 ['block5a_se_reduce[0][0]']
block5a_se_excite (Multiply) (None, 19, 19, 576) 0 ['block5a_activation[0][0]',
'block5a_se_expand[0][0]']
block5a_project_conv (Conv2D) (None, 19, 19, 136) 78336 ['block5a_se_excite[0][0]']
block5a_project_bn (BatchNorma (None, 19, 19, 136) 544 ['block5a_project_conv[0][0]']
lization)
block5b_expand_conv (Conv2D) (None, 19, 19, 816) 110976 ['block5a_project_bn[0][0]']
block5b_expand_bn (BatchNormal (None, 19, 19, 816) 3264 ['block5b_expand_conv[0][0]']
ization)
block5b_expand_activation (Act (None, 19, 19, 816) 0 ['block5b_expand_bn[0][0]']
ivation)
block5b_dwconv (DepthwiseConv2 (None, 19, 19, 816) 20400 ['block5b_expand_activation[0][0]
D) ']
block5b_bn (BatchNormalization (None, 19, 19, 816) 3264 ['block5b_dwconv[0][0]']
)
block5b_activation (Activation (None, 19, 19, 816) 0 ['block5b_bn[0][0]']
)
block5b_se_squeeze (GlobalAver (None, 816) 0 ['block5b_activation[0][0]']
agePooling2D)
block5b_se_reshape (Reshape) (None, 1, 1, 816) 0 ['block5b_se_squeeze[0][0]']
block5b_se_reduce (Conv2D) (None, 1, 1, 34) 27778 ['block5b_se_reshape[0][0]']
block5b_se_expand (Conv2D) (None, 1, 1, 816) 28560 ['block5b_se_reduce[0][0]']
block5b_se_excite (Multiply) (None, 19, 19, 816) 0 ['block5b_activation[0][0]',
'block5b_se_expand[0][0]']
block5b_project_conv (Conv2D) (None, 19, 19, 136) 110976 ['block5b_se_excite[0][0]']
block5b_project_bn (BatchNorma (None, 19, 19, 136) 544 ['block5b_project_conv[0][0]']
lization)
block5b_drop (FixedDropout) (None, 19, 19, 136) 0 ['block5b_project_bn[0][0]']
block5b_add (Add) (None, 19, 19, 136) 0 ['block5b_drop[0][0]',
'block5a_project_bn[0][0]']
block5c_expand_conv (Conv2D) (None, 19, 19, 816) 110976 ['block5b_add[0][0]']
block5c_expand_bn (BatchNormal (None, 19, 19, 816) 3264 ['block5c_expand_conv[0][0]']
ization)
block5c_expand_activation (Act (None, 19, 19, 816) 0 ['block5c_expand_bn[0][0]']
ivation)
block5c_dwconv (DepthwiseConv2 (None, 19, 19, 816) 20400 ['block5c_expand_activation[0][0]
D) ']
block5c_bn (BatchNormalization (None, 19, 19, 816) 3264 ['block5c_dwconv[0][0]']
)
block5c_activation (Activation (None, 19, 19, 816) 0 ['block5c_bn[0][0]']
)
block5c_se_squeeze (GlobalAver (None, 816) 0 ['block5c_activation[0][0]']
agePooling2D)
block5c_se_reshape (Reshape) (None, 1, 1, 816) 0 ['block5c_se_squeeze[0][0]']
block5c_se_reduce (Conv2D) (None, 1, 1, 34) 27778 ['block5c_se_reshape[0][0]']
block5c_se_expand (Conv2D) (None, 1, 1, 816) 28560 ['block5c_se_reduce[0][0]']
block5c_se_excite (Multiply) (None, 19, 19, 816) 0 ['block5c_activation[0][0]',
'block5c_se_expand[0][0]']
block5c_project_conv (Conv2D) (None, 19, 19, 136) 110976 ['block5c_se_excite[0][0]']
block5c_project_bn (BatchNorma (None, 19, 19, 136) 544 ['block5c_project_conv[0][0]']
lization)
block5c_drop (FixedDropout) (None, 19, 19, 136) 0 ['block5c_project_bn[0][0]']
block5c_add (Add) (None, 19, 19, 136) 0 ['block5c_drop[0][0]',
'block5b_add[0][0]']
block5d_expand_conv (Conv2D) (None, 19, 19, 816) 110976 ['block5c_add[0][0]']
block5d_expand_bn (BatchNormal (None, 19, 19, 816) 3264 ['block5d_expand_conv[0][0]']
ization)
block5d_expand_activation (Act (None, 19, 19, 816) 0 ['block5d_expand_bn[0][0]']
ivation)
block5d_dwconv (DepthwiseConv2 (None, 19, 19, 816) 20400 ['block5d_expand_activation[0][0]
D) ']
block5d_bn (BatchNormalization (None, 19, 19, 816) 3264 ['block5d_dwconv[0][0]']
)
block5d_activation (Activation (None, 19, 19, 816) 0 ['block5d_bn[0][0]']
)
block5d_se_squeeze (GlobalAver (None, 816) 0 ['block5d_activation[0][0]']
agePooling2D)
block5d_se_reshape (Reshape) (None, 1, 1, 816) 0 ['block5d_se_squeeze[0][0]']
block5d_se_reduce (Conv2D) (None, 1, 1, 34) 27778 ['block5d_se_reshape[0][0]']
block5d_se_expand (Conv2D) (None, 1, 1, 816) 28560 ['block5d_se_reduce[0][0]']
block5d_se_excite (Multiply) (None, 19, 19, 816) 0 ['block5d_activation[0][0]',
'block5d_se_expand[0][0]']
block5d_project_conv (Conv2D) (None, 19, 19, 136) 110976 ['block5d_se_excite[0][0]']
block5d_project_bn (BatchNorma (None, 19, 19, 136) 544 ['block5d_project_conv[0][0]']
lization)
block5d_drop (FixedDropout) (None, 19, 19, 136) 0 ['block5d_project_bn[0][0]']
block5d_add (Add) (None, 19, 19, 136) 0 ['block5d_drop[0][0]',
'block5c_add[0][0]']
block5e_expand_conv (Conv2D) (None, 19, 19, 816) 110976 ['block5d_add[0][0]']
block5e_expand_bn (BatchNormal (None, 19, 19, 816) 3264 ['block5e_expand_conv[0][0]']
ization)
block5e_expand_activation (Act (None, 19, 19, 816) 0 ['block5e_expand_bn[0][0]']
ivation)
block5e_dwconv (DepthwiseConv2 (None, 19, 19, 816) 20400 ['block5e_expand_activation[0][0]
D) ']
block5e_bn (BatchNormalization (None, 19, 19, 816) 3264 ['block5e_dwconv[0][0]']
)
block5e_activation (Activation (None, 19, 19, 816) 0 ['block5e_bn[0][0]']
)
block5e_se_squeeze (GlobalAver (None, 816) 0 ['block5e_activation[0][0]']
agePooling2D)
block5e_se_reshape (Reshape) (None, 1, 1, 816) 0 ['block5e_se_squeeze[0][0]']
block5e_se_reduce (Conv2D) (None, 1, 1, 34) 27778 ['block5e_se_reshape[0][0]']
block5e_se_expand (Conv2D) (None, 1, 1, 816) 28560 ['block5e_se_reduce[0][0]']
block5e_se_excite (Multiply) (None, 19, 19, 816) 0 ['block5e_activation[0][0]',
'block5e_se_expand[0][0]']
block5e_project_conv (Conv2D) (None, 19, 19, 136) 110976 ['block5e_se_excite[0][0]']
block5e_project_bn (BatchNorma (None, 19, 19, 136) 544 ['block5e_project_conv[0][0]']
lization)
block5e_drop (FixedDropout) (None, 19, 19, 136) 0 ['block5e_project_bn[0][0]']
block5e_add (Add) (None, 19, 19, 136) 0 ['block5e_drop[0][0]',
'block5d_add[0][0]']
block6a_expand_conv (Conv2D) (None, 19, 19, 816) 110976 ['block5e_add[0][0]']
block6a_expand_bn (BatchNormal (None, 19, 19, 816) 3264 ['block6a_expand_conv[0][0]']
ization)
block6a_expand_activation (Act (None, 19, 19, 816) 0 ['block6a_expand_bn[0][0]']
ivation)
block6a_dwconv (DepthwiseConv2 (None, 10, 10, 816) 20400 ['block6a_expand_activation[0][0]
D) ']
block6a_bn (BatchNormalization (None, 10, 10, 816) 3264 ['block6a_dwconv[0][0]']
)
block6a_activation (Activation (None, 10, 10, 816) 0 ['block6a_bn[0][0]']
)
block6a_se_squeeze (GlobalAver (None, 816) 0 ['block6a_activation[0][0]']
agePooling2D)
block6a_se_reshape (Reshape) (None, 1, 1, 816) 0 ['block6a_se_squeeze[0][0]']
block6a_se_reduce (Conv2D) (None, 1, 1, 34) 27778 ['block6a_se_reshape[0][0]']
block6a_se_expand (Conv2D) (None, 1, 1, 816) 28560 ['block6a_se_reduce[0][0]']
block6a_se_excite (Multiply) (None, 10, 10, 816) 0 ['block6a_activation[0][0]',
'block6a_se_expand[0][0]']
block6a_project_conv (Conv2D) (None, 10, 10, 232) 189312 ['block6a_se_excite[0][0]']
block6a_project_bn (BatchNorma (None, 10, 10, 232) 928 ['block6a_project_conv[0][0]']
lization)
block6b_expand_conv (Conv2D) (None, 10, 10, 1392 322944 ['block6a_project_bn[0][0]']
)
block6b_expand_bn (BatchNormal (None, 10, 10, 1392 5568 ['block6b_expand_conv[0][0]']
ization) )
block6b_expand_activation (Act (None, 10, 10, 1392 0 ['block6b_expand_bn[0][0]']
ivation) )
block6b_dwconv (DepthwiseConv2 (None, 10, 10, 1392 34800 ['block6b_expand_activation[0][0]
D) ) ']
block6b_bn (BatchNormalization (None, 10, 10, 1392 5568 ['block6b_dwconv[0][0]']
) )
block6b_activation (Activation (None, 10, 10, 1392 0 ['block6b_bn[0][0]']
) )
block6b_se_squeeze (GlobalAver (None, 1392) 0 ['block6b_activation[0][0]']
agePooling2D)
block6b_se_reshape (Reshape) (None, 1, 1, 1392) 0 ['block6b_se_squeeze[0][0]']
block6b_se_reduce (Conv2D) (None, 1, 1, 58) 80794 ['block6b_se_reshape[0][0]']
block6b_se_expand (Conv2D) (None, 1, 1, 1392) 82128 ['block6b_se_reduce[0][0]']
block6b_se_excite (Multiply) (None, 10, 10, 1392 0 ['block6b_activation[0][0]',
) 'block6b_se_expand[0][0]']
block6b_project_conv (Conv2D) (None, 10, 10, 232) 322944 ['block6b_se_excite[0][0]']
block6b_project_bn (BatchNorma (None, 10, 10, 232) 928 ['block6b_project_conv[0][0]']
lization)
block6b_drop (FixedDropout) (None, 10, 10, 232) 0 ['block6b_project_bn[0][0]']
block6b_add (Add) (None, 10, 10, 232) 0 ['block6b_drop[0][0]',
'block6a_project_bn[0][0]']
block6c_expand_conv (Conv2D) (None, 10, 10, 1392 322944 ['block6b_add[0][0]']
)
block6c_expand_bn (BatchNormal (None, 10, 10, 1392 5568 ['block6c_expand_conv[0][0]']
ization) )
block6c_expand_activation (Act (None, 10, 10, 1392 0 ['block6c_expand_bn[0][0]']
ivation) )
block6c_dwconv (DepthwiseConv2 (None, 10, 10, 1392 34800 ['block6c_expand_activation[0][0]
D) ) ']
block6c_bn (BatchNormalization (None, 10, 10, 1392 5568 ['block6c_dwconv[0][0]']
) )
block6c_activation (Activation (None, 10, 10, 1392 0 ['block6c_bn[0][0]']
) )
block6c_se_squeeze (GlobalAver (None, 1392) 0 ['block6c_activation[0][0]']
agePooling2D)
block6c_se_reshape (Reshape) (None, 1, 1, 1392) 0 ['block6c_se_squeeze[0][0]']
block6c_se_reduce (Conv2D) (None, 1, 1, 58) 80794 ['block6c_se_reshape[0][0]']
block6c_se_expand (Conv2D) (None, 1, 1, 1392) 82128 ['block6c_se_reduce[0][0]']
block6c_se_excite (Multiply) (None, 10, 10, 1392 0 ['block6c_activation[0][0]',
) 'block6c_se_expand[0][0]']
block6c_project_conv (Conv2D) (None, 10, 10, 232) 322944 ['block6c_se_excite[0][0]']
block6c_project_bn (BatchNorma (None, 10, 10, 232) 928 ['block6c_project_conv[0][0]']
lization)
block6c_drop (FixedDropout) (None, 10, 10, 232) 0 ['block6c_project_bn[0][0]']
block6c_add (Add) (None, 10, 10, 232) 0 ['block6c_drop[0][0]',
'block6b_add[0][0]']
block6d_expand_conv (Conv2D) (None, 10, 10, 1392 322944 ['block6c_add[0][0]']
)
block6d_expand_bn (BatchNormal (None, 10, 10, 1392 5568 ['block6d_expand_conv[0][0]']
ization) )
block6d_expand_activation (Act (None, 10, 10, 1392 0 ['block6d_expand_bn[0][0]']
ivation) )
block6d_dwconv (DepthwiseConv2 (None, 10, 10, 1392 34800 ['block6d_expand_activation[0][0]
D) ) ']
block6d_bn (BatchNormalization (None, 10, 10, 1392 5568 ['block6d_dwconv[0][0]']
) )
block6d_activation (Activation (None, 10, 10, 1392 0 ['block6d_bn[0][0]']
) )
block6d_se_squeeze (GlobalAver (None, 1392) 0 ['block6d_activation[0][0]']
agePooling2D)
block6d_se_reshape (Reshape) (None, 1, 1, 1392) 0 ['block6d_se_squeeze[0][0]']
block6d_se_reduce (Conv2D) (None, 1, 1, 58) 80794 ['block6d_se_reshape[0][0]']
block6d_se_expand (Conv2D) (None, 1, 1, 1392) 82128 ['block6d_se_reduce[0][0]']
block6d_se_excite (Multiply) (None, 10, 10, 1392 0 ['block6d_activation[0][0]',
) 'block6d_se_expand[0][0]']
block6d_project_conv (Conv2D) (None, 10, 10, 232) 322944 ['block6d_se_excite[0][0]']
block6d_project_bn (BatchNorma (None, 10, 10, 232) 928 ['block6d_project_conv[0][0]']
lization)
block6d_drop (FixedDropout) (None, 10, 10, 232) 0 ['block6d_project_bn[0][0]']
block6d_add (Add) (None, 10, 10, 232) 0 ['block6d_drop[0][0]',
'block6c_add[0][0]']
block6e_expand_conv (Conv2D) (None, 10, 10, 1392 322944 ['block6d_add[0][0]']
)
block6e_expand_bn (BatchNormal (None, 10, 10, 1392 5568 ['block6e_expand_conv[0][0]']
ization) )
block6e_expand_activation (Act (None, 10, 10, 1392 0 ['block6e_expand_bn[0][0]']
ivation) )
block6e_dwconv (DepthwiseConv2 (None, 10, 10, 1392 34800 ['block6e_expand_activation[0][0]
D) ) ']
block6e_bn (BatchNormalization (None, 10, 10, 1392 5568 ['block6e_dwconv[0][0]']
) )
block6e_activation (Activation (None, 10, 10, 1392 0 ['block6e_bn[0][0]']
) )
block6e_se_squeeze (GlobalAver (None, 1392) 0 ['block6e_activation[0][0]']
agePooling2D)
block6e_se_reshape (Reshape) (None, 1, 1, 1392) 0 ['block6e_se_squeeze[0][0]']
block6e_se_reduce (Conv2D) (None, 1, 1, 58) 80794 ['block6e_se_reshape[0][0]']
block6e_se_expand (Conv2D) (None, 1, 1, 1392) 82128 ['block6e_se_reduce[0][0]']
block6e_se_excite (Multiply) (None, 10, 10, 1392 0 ['block6e_activation[0][0]',
) 'block6e_se_expand[0][0]']
block6e_project_conv (Conv2D) (None, 10, 10, 232) 322944 ['block6e_se_excite[0][0]']
block6e_project_bn (BatchNorma (None, 10, 10, 232) 928 ['block6e_project_conv[0][0]']
lization)
block6e_drop (FixedDropout) (None, 10, 10, 232) 0 ['block6e_project_bn[0][0]']
block6e_add (Add) (None, 10, 10, 232) 0 ['block6e_drop[0][0]',
'block6d_add[0][0]']
block6f_expand_conv (Conv2D) (None, 10, 10, 1392 322944 ['block6e_add[0][0]']
)
block6f_expand_bn (BatchNormal (None, 10, 10, 1392 5568 ['block6f_expand_conv[0][0]']
ization) )
block6f_expand_activation (Act (None, 10, 10, 1392 0 ['block6f_expand_bn[0][0]']
ivation) )
block6f_dwconv (DepthwiseConv2 (None, 10, 10, 1392 34800 ['block6f_expand_activation[0][0]
D) ) ']
block6f_bn (BatchNormalization (None, 10, 10, 1392 5568 ['block6f_dwconv[0][0]']
) )
block6f_activation (Activation (None, 10, 10, 1392 0 ['block6f_bn[0][0]']
) )
block6f_se_squeeze (GlobalAver (None, 1392) 0 ['block6f_activation[0][0]']
agePooling2D)
block6f_se_reshape (Reshape) (None, 1, 1, 1392) 0 ['block6f_se_squeeze[0][0]']
block6f_se_reduce (Conv2D) (None, 1, 1, 58) 80794 ['block6f_se_reshape[0][0]']
block6f_se_expand (Conv2D) (None, 1, 1, 1392) 82128 ['block6f_se_reduce[0][0]']
block6f_se_excite (Multiply) (None, 10, 10, 1392 0 ['block6f_activation[0][0]',
) 'block6f_se_expand[0][0]']
block6f_project_conv (Conv2D) (None, 10, 10, 232) 322944 ['block6f_se_excite[0][0]']
block6f_project_bn (BatchNorma (None, 10, 10, 232) 928 ['block6f_project_conv[0][0]']
lization)
block6f_drop (FixedDropout) (None, 10, 10, 232) 0 ['block6f_project_bn[0][0]']
block6f_add (Add) (None, 10, 10, 232) 0 ['block6f_drop[0][0]',
'block6e_add[0][0]']
block7a_expand_conv (Conv2D) (None, 10, 10, 1392 322944 ['block6f_add[0][0]']
)
block7a_expand_bn (BatchNormal (None, 10, 10, 1392 5568 ['block7a_expand_conv[0][0]']
ization) )
block7a_expand_activation (Act (None, 10, 10, 1392 0 ['block7a_expand_bn[0][0]']
ivation) )
block7a_dwconv (DepthwiseConv2 (None, 10, 10, 1392 12528 ['block7a_expand_activation[0][0]
D) ) ']
block7a_bn (BatchNormalization (None, 10, 10, 1392 5568 ['block7a_dwconv[0][0]']
) )
block7a_activation (Activation (None, 10, 10, 1392 0 ['block7a_bn[0][0]']
) )
block7a_se_squeeze (GlobalAver (None, 1392) 0 ['block7a_activation[0][0]']
agePooling2D)
block7a_se_reshape (Reshape) (None, 1, 1, 1392) 0 ['block7a_se_squeeze[0][0]']
block7a_se_reduce (Conv2D) (None, 1, 1, 58) 80794 ['block7a_se_reshape[0][0]']
block7a_se_expand (Conv2D) (None, 1, 1, 1392) 82128 ['block7a_se_reduce[0][0]']
block7a_se_excite (Multiply) (None, 10, 10, 1392 0 ['block7a_activation[0][0]',
) 'block7a_se_expand[0][0]']
block7a_project_conv (Conv2D) (None, 10, 10, 384) 534528 ['block7a_se_excite[0][0]']
block7a_project_bn (BatchNorma (None, 10, 10, 384) 1536 ['block7a_project_conv[0][0]']
lization)
block7b_expand_conv (Conv2D) (None, 10, 10, 2304 884736 ['block7a_project_bn[0][0]']
)
block7b_expand_bn (BatchNormal (None, 10, 10, 2304 9216 ['block7b_expand_conv[0][0]']
ization) )
block7b_expand_activation (Act (None, 10, 10, 2304 0 ['block7b_expand_bn[0][0]']
ivation) )
block7b_dwconv (DepthwiseConv2 (None, 10, 10, 2304 20736 ['block7b_expand_activation[0][0]
D) ) ']
block7b_bn (BatchNormalization (None, 10, 10, 2304 9216 ['block7b_dwconv[0][0]']
) )
block7b_activation (Activation (None, 10, 10, 2304 0 ['block7b_bn[0][0]']
) )
block7b_se_squeeze (GlobalAver (None, 2304) 0 ['block7b_activation[0][0]']
agePooling2D)
block7b_se_reshape (Reshape) (None, 1, 1, 2304) 0 ['block7b_se_squeeze[0][0]']
block7b_se_reduce (Conv2D) (None, 1, 1, 96) 221280 ['block7b_se_reshape[0][0]']
block7b_se_expand (Conv2D) (None, 1, 1, 2304) 223488 ['block7b_se_reduce[0][0]']
block7b_se_excite (Multiply) (None, 10, 10, 2304 0 ['block7b_activation[0][0]',
) 'block7b_se_expand[0][0]']
block7b_project_conv (Conv2D) (None, 10, 10, 384) 884736 ['block7b_se_excite[0][0]']
block7b_project_bn (BatchNorma (None, 10, 10, 384) 1536 ['block7b_project_conv[0][0]']
lization)
block7b_drop (FixedDropout) (None, 10, 10, 384) 0 ['block7b_project_bn[0][0]']
block7b_add (Add) (None, 10, 10, 384) 0 ['block7b_drop[0][0]',
'block7a_project_bn[0][0]']
top_conv (Conv2D) (None, 10, 10, 1536 589824 ['block7b_add[0][0]']
)
top_bn (BatchNormalization) (None, 10, 10, 1536 6144 ['top_conv[0][0]']
)
top_activation (Activation) (None, 10, 10, 1536 0 ['top_bn[0][0]']
)
flatten_3 (Flatten) (None, 153600) 0 ['top_activation[0][0]']
dense_6 (Dense) (None, 1024) 157287424 ['flatten_3[0][0]']
dropout_3 (Dropout) (None, 1024) 0 ['dense_6[0][0]']
dense_7 (Dense) (None, 1) 1025 ['dropout_3[0][0]']
==================================================================================================
Total params: 168,071,977
Trainable params: 157,288,449
Non-trainable params: 10,783,528
__________________________________________________________________________________________________
# Get the total number of parameters in the model
model_params = model_final.count_params()
# Specify the optimizer, loss function and evaluation metrics.
model_final.compile(loss='binary_crossentropy', optimizer=tf.keras.optimizers.RMSprop(learning_rate=0.0001), metrics=['accuracy'])
t1 = time.time()
#train the model
eff_history = model_final.fit_generator(train_generator, validation_data = validation_generator, steps_per_epoch = 100, epochs = 10)
fit_time = time.time() - t1
<ipython-input-85-b7f31b017b18>:3: UserWarning: `Model.fit_generator` is deprecated and will be removed in a future version. Please use `Model.fit`, which supports generators.
  eff_history = model_final.fit_generator(train_generator, validation_data = validation_generator, steps_per_epoch = 100, epochs = 10)
Epoch 1/10
100/100 [==============================] - 61s 504ms/step - loss: 0.5111 - accuracy: 0.9280 - val_loss: 0.0415 - val_accuracy: 0.9937
Epoch 2/10
100/100 [==============================] - 47s 466ms/step - loss: 0.4081 - accuracy: 0.9620 - val_loss: 0.0409 - val_accuracy: 0.9900
Epoch 3/10
100/100 [==============================] - 46s 461ms/step - loss: 0.4214 - accuracy: 0.9585 - val_loss: 0.0315 - val_accuracy: 0.9912
Epoch 4/10
100/100 [==============================] - 47s 464ms/step - loss: 0.3733 - accuracy: 0.9632 - val_loss: 0.0365 - val_accuracy: 0.9937
Epoch 5/10
100/100 [==============================] - 47s 465ms/step - loss: 0.3327 - accuracy: 0.9620 - val_loss: 0.0403 - val_accuracy: 0.9925
Epoch 6/10
100/100 [==============================] - 47s 469ms/step - loss: 0.3235 - accuracy: 0.9657 - val_loss: 0.0388 - val_accuracy: 0.9925
Epoch 7/10
100/100 [==============================] - 47s 468ms/step - loss: 0.4184 - accuracy: 0.9620 - val_loss: 0.0627 - val_accuracy: 0.9887
Epoch 8/10
100/100 [==============================] - 47s 465ms/step - loss: 0.3922 - accuracy: 0.9655 - val_loss: 0.0273 - val_accuracy: 0.9937
Epoch 9/10
100/100 [==============================] - 47s 464ms/step - loss: 0.3318 - accuracy: 0.9703 - val_loss: 0.0106 - val_accuracy: 0.9962
Epoch 10/10
100/100 [==============================] - 46s 460ms/step - loss: 0.6514 - accuracy: 0.9627 - val_loss: 0.0082 - val_accuracy: 0.9975
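As the warning above notes, `Model.fit_generator` is deprecated in favor of `Model.fit`, which accepts generators directly. A minimal equivalent sketch, assuming the same model_final, train_generator, and validation_generator defined above:
# Same training run via the non-deprecated API; Model.fit accepts Keras data generators
t1 = time.time()
eff_history = model_final.fit(train_generator, validation_data=validation_generator, steps_per_epoch=100, epochs=10)
fit_time = time.time() - t1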
# time it took to fit the model
print(fit_time)
516.3287916183472
# Plot training and validation accuracy and loss for each epoch
acc = eff_history.history['accuracy']
val_acc = eff_history.history['val_accuracy']
loss = eff_history.history['loss']
val_loss = eff_history.history['val_loss']
epochs = range(1, len(acc) + 1)
plt.plot(epochs, acc, label='Training Accuracy')
plt.plot(epochs, val_acc, label='Validation Accuracy')
plt.title('Training and Validation Accuracy')
plt.legend()
plt.figure()
plt.plot(epochs, loss, label='Training Loss')
plt.plot(epochs, val_loss, label='Validation Loss')
plt.title('Training and Validation Loss')
plt.legend()
plt.show()
# Test dataset
test_datagen = ImageDataGenerator(rescale=1./255)
test_generator = test_datagen.flow_from_directory(
    test_dir,
    target_size=(300, 300),
    shuffle = False,
    class_mode='binary',
    batch_size=1)
Found 2023 images belonging to 2 classes.
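Before mapping predictions back to class names, it can help to confirm which index flow_from_directory assigned to each class. A quick check (the mapping below is what the later cat/dog decoding assumes):
# Show the class-to-index mapping assigned by flow_from_directory (expected here: {'cats': 0, 'dogs': 1})
print(test_generator.class_indices)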
# Get the number of test images
filenames = test_generator.filenames
nb_samples = len(filenames)
#Predict on test set
predict = model_final.predict_generator(test_generator,steps = nb_samples)
<ipython-input-90-7710eff794cf>:2: UserWarning: `Model.predict_generator` is deprecated and will be removed in a future version. Please use `Model.predict`, which supports generators.
  predict = model_final.predict_generator(test_generator,steps = nb_samples)
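As with training, the deprecated predict_generator call can be written with Model.predict, which also supports generators. A minimal equivalent sketch using the same test_generator:
# Same prediction via the non-deprecated API
predict = model_final.predict(test_generator, steps=nb_samples)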
# Convert the predicted probabilities into class labels using a 0.5 threshold
pred_list = []
for i in predict:
    if i > 0.5:
        result = 1  # dog
        pred_list.append(result)
    else:
        result = 0  # cat
        pred_list.append(result)
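The loop above thresholds each probability one at a time; the same labels can be produced in a single vectorized step with NumPy. A short sketch, assuming predict is an array of shape (n, 1):
# Vectorized 0.5 threshold: 1 = dog, 0 = cat
pred_list = (predict.ravel() > 0.5).astype(int).tolist()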
#Create dataframe of image ID, image true label, image predicted label
import pandas as pd
image_ids = [name.split('/')[-1] for name in test_generator.filenames]
image_label = [name.split('/')[0] for name in test_generator.filenames]
data = {'id': image_ids, 'label':image_label, 'prediction':pred_list}
data_df = pd.DataFrame(data)
data_df.label.replace(('cats', 'dogs'), (0, 1), inplace=True) # change cat and dog label to 0 or 1
#Get test accuracy score
from sklearn.metrics import accuracy_score, confusion_matrix
test_accuracy = accuracy_score(data_df['label'], data_df['prediction'])
print('Test Accuracy: ', round((test_accuracy * 100), 2), "%")
Test Accuracy: 98.86 %
from sklearn.metrics import classification_report
#Classification Report
print(classification_report(data_df['label'], data_df['prediction']))
              precision    recall  f1-score   support

           0       0.99      0.98      0.99      1011
           1       0.98      0.99      0.99      1012

    accuracy                           0.99      2023
   macro avg       0.99      0.99      0.99      2023
weighted avg       0.99      0.99      0.99      2023
# Create the confusion matrix
import seaborn as sns
label = [0, 1]  # 0 = cat and 1 = dog
cm = confusion_matrix(data_df['label'], data_df['prediction'], labels = label)
# Plot
ax = plt.subplot()
sns.heatmap(cm, annot=True, fmt='g', ax=ax)
# Labels, title, and ticks
ax.set_xlabel('Predicted labels')
ax.set_ylabel('True labels')
ax.set_title('Confusion Matrix')
ax.xaxis.set_ticklabels(["Cat", "Dog"]); ax.yaxis.set_ticklabels(["Cat", "Dog"])
[Text(0, 0.5, 'Cat'), Text(0, 1.5, 'Dog')]
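The same matrix can also be plotted without seaborn via scikit-learn's ConfusionMatrixDisplay (assuming it is imported from sklearn.metrics). A short alternative sketch:
# Alternative plot using scikit-learn's built-in display helper
disp = ConfusionMatrixDisplay(confusion_matrix=cm, display_labels=["Cat", "Dog"])
disp.plot(cmap='Blues')
plt.show()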
ExperimentLog.loc[len(ExperimentLog)] = [
    "EfficientNet B3",
    300,
    "RMSprop",
    10,
    max(acc),
    max(val_acc),
    test_accuracy,
    fit_time,
    model_params
]
ExperimentLog
| | Base Model | Input Resolution | Optimizer | Epochs | Training Accuracy | Validation Accuracy | Test Accuracy | Fit Time (s) | Total Parameters |
|---|---|---|---|---|---|---|---|---|---|
| 0 | EfficientNet B0 | 224 | RMSprop | 10 | 0.951500 | 0.98750 | 0.971330 | 2195.619411 | 68276893 |
| 1 | EfficientNet B0 with decay | 224 | RMSprop | 10 | 0.956171 | 0.98750 | 0.974790 | 381.655278 | 68276893 |
| 2 | EfficientNet B1 | 240 | RMSprop | 10 | 0.959698 | 0.99125 | 0.985171 | 349.486195 | 90463361 |
| 3 | EfficientNet B2 | 260 | RMSprop | 10 | 0.965239 | 0.99500 | 0.983193 | 380.205944 | 124555763 |
| 4 | EfficientNet B3 | 300 | RMSprop | 10 | 0.970277 | 0.99750 | 0.988631 | 516.328792 | 168071977 |
# Add rescaling and augmentation to ImageDataGenerator for the training set
train_datagen = ImageDataGenerator(rescale = 1./255., rotation_range = 40, width_shift_range = 0.2, height_shift_range = 0.2, shear_range = 0.2, zoom_range = 0.2, horizontal_flip = True, validation_split=0.1) # set validation split
# Rescale validation set. No augmentation on the validation set.
validation_datagen = ImageDataGenerator(rescale = 1./255.,validation_split=0.1) # set validation split
#Read images directly from directory.
train_generator = train_datagen.flow_from_directory(train_dir, seed = 42, shuffle = True, batch_size = 20, class_mode = 'binary', target_size = (380, 380), subset='training') #set as training data
validation_generator = validation_datagen.flow_from_directory(train_dir, seed = 42, shuffle = True, batch_size = 20, class_mode = 'binary', target_size = (380, 380), subset='validation') # same directory as training data. Set as validation data
Found 7205 images belonging to 2 classes.
Found 800 images belonging to 2 classes.
# Instantiate the EfficientNetB4 architecture with pre-trained ImageNet weights and without the top classification layer
base_model = efn.EfficientNetB4(input_shape = (380, 380, 3), include_top = False, weights = 'imagenet')
Downloading data from https://github.com/Callidior/keras-applications/releases/download/efficientnet/efficientnet-b4_weights_tf_dim_ordering_tf_kernels_autoaugment_notop.h5 71892840/71892840 [==============================] - 4s 0us/step
# Set trainable attribute to False for all of the base model layers
for layer in base_model.layers:
    layer.trainable = False
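A quick sanity check that the backbone is actually frozen (a sketch; the count should be 0 after the loop above):
# Count how many base-model layers are still trainable
print(sum(layer.trainable for layer in base_model.layers))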
#Build on top of existing base model.
x = base_model.output
x = layers.Flatten()(x) #convert to 1D array
x = layers.Dense(1024, activation="relu")(x) #fully connected layer with 1,024 hidden units and ReLU activation
x = layers.Dropout(0.5)(x) #Drops 50% of inputs to zero at each training iteration (prevents overfitting)
# Add a final sigmoid layer with 1 node for classification output (probability between 0 and 1)
predictions = layers.Dense(1, activation="sigmoid")(x)
model_final = Model(inputs = base_model.input, outputs = predictions)
# Print the Keras model summary
model_sum = model_final.summary()
Model: "model_4"
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_5 (InputLayer) [(None, 380, 380, 3 0 []
)]
stem_conv (Conv2D) (None, 190, 190, 48 1296 ['input_5[0][0]']
)
stem_bn (BatchNormalization) (None, 190, 190, 48 192 ['stem_conv[0][0]']
)
stem_activation (Activation) (None, 190, 190, 48 0 ['stem_bn[0][0]']
)
block1a_dwconv (DepthwiseConv2 (None, 190, 190, 48 432 ['stem_activation[0][0]']
D) )
block1a_bn (BatchNormalization (None, 190, 190, 48 192 ['block1a_dwconv[0][0]']
) )
block1a_activation (Activation (None, 190, 190, 48 0 ['block1a_bn[0][0]']
) )
block1a_se_squeeze (GlobalAver (None, 48) 0 ['block1a_activation[0][0]']
agePooling2D)
block1a_se_reshape (Reshape) (None, 1, 1, 48) 0 ['block1a_se_squeeze[0][0]']
block1a_se_reduce (Conv2D) (None, 1, 1, 12) 588 ['block1a_se_reshape[0][0]']
block1a_se_expand (Conv2D) (None, 1, 1, 48) 624 ['block1a_se_reduce[0][0]']
block1a_se_excite (Multiply) (None, 190, 190, 48 0 ['block1a_activation[0][0]',
) 'block1a_se_expand[0][0]']
block1a_project_conv (Conv2D) (None, 190, 190, 24 1152 ['block1a_se_excite[0][0]']
)
block1a_project_bn (BatchNorma (None, 190, 190, 24 96 ['block1a_project_conv[0][0]']
lization) )
block1b_dwconv (DepthwiseConv2 (None, 190, 190, 24 216 ['block1a_project_bn[0][0]']
D) )
block1b_bn (BatchNormalization (None, 190, 190, 24 96 ['block1b_dwconv[0][0]']
) )
block1b_activation (Activation (None, 190, 190, 24 0 ['block1b_bn[0][0]']
) )
block1b_se_squeeze (GlobalAver (None, 24) 0 ['block1b_activation[0][0]']
agePooling2D)
block1b_se_reshape (Reshape) (None, 1, 1, 24) 0 ['block1b_se_squeeze[0][0]']
block1b_se_reduce (Conv2D) (None, 1, 1, 6) 150 ['block1b_se_reshape[0][0]']
block1b_se_expand (Conv2D) (None, 1, 1, 24) 168 ['block1b_se_reduce[0][0]']
block1b_se_excite (Multiply) (None, 190, 190, 24 0 ['block1b_activation[0][0]',
) 'block1b_se_expand[0][0]']
block1b_project_conv (Conv2D) (None, 190, 190, 24 576 ['block1b_se_excite[0][0]']
)
block1b_project_bn (BatchNorma (None, 190, 190, 24 96 ['block1b_project_conv[0][0]']
lization) )
block1b_drop (FixedDropout) (None, 190, 190, 24 0 ['block1b_project_bn[0][0]']
)
block1b_add (Add) (None, 190, 190, 24 0 ['block1b_drop[0][0]',
) 'block1a_project_bn[0][0]']
block2a_expand_conv (Conv2D) (None, 190, 190, 14 3456 ['block1b_add[0][0]']
4)
block2a_expand_bn (BatchNormal (None, 190, 190, 14 576 ['block2a_expand_conv[0][0]']
ization) 4)
block2a_expand_activation (Act (None, 190, 190, 14 0 ['block2a_expand_bn[0][0]']
ivation) 4)
block2a_dwconv (DepthwiseConv2 (None, 95, 95, 144) 1296 ['block2a_expand_activation[0][0]
D) ']
block2a_bn (BatchNormalization (None, 95, 95, 144) 576 ['block2a_dwconv[0][0]']
)
block2a_activation (Activation (None, 95, 95, 144) 0 ['block2a_bn[0][0]']
)
block2a_se_squeeze (GlobalAver (None, 144) 0 ['block2a_activation[0][0]']
agePooling2D)
block2a_se_reshape (Reshape) (None, 1, 1, 144) 0 ['block2a_se_squeeze[0][0]']
block2a_se_reduce (Conv2D) (None, 1, 1, 6) 870 ['block2a_se_reshape[0][0]']
block2a_se_expand (Conv2D) (None, 1, 1, 144) 1008 ['block2a_se_reduce[0][0]']
block2a_se_excite (Multiply) (None, 95, 95, 144) 0 ['block2a_activation[0][0]',
'block2a_se_expand[0][0]']
block2a_project_conv (Conv2D) (None, 95, 95, 32) 4608 ['block2a_se_excite[0][0]']
block2a_project_bn (BatchNorma (None, 95, 95, 32) 128 ['block2a_project_conv[0][0]']
lization)
block2b_expand_conv (Conv2D) (None, 95, 95, 192) 6144 ['block2a_project_bn[0][0]']
block2b_expand_bn (BatchNormal (None, 95, 95, 192) 768 ['block2b_expand_conv[0][0]']
ization)
block2b_expand_activation (Act (None, 95, 95, 192) 0 ['block2b_expand_bn[0][0]']
ivation)
block2b_dwconv (DepthwiseConv2 (None, 95, 95, 192) 1728 ['block2b_expand_activation[0][0]
D) ']
block2b_bn (BatchNormalization (None, 95, 95, 192) 768 ['block2b_dwconv[0][0]']
)
block2b_activation (Activation (None, 95, 95, 192) 0 ['block2b_bn[0][0]']
)
block2b_se_squeeze (GlobalAver (None, 192) 0 ['block2b_activation[0][0]']
agePooling2D)
block2b_se_reshape (Reshape) (None, 1, 1, 192) 0 ['block2b_se_squeeze[0][0]']
block2b_se_reduce (Conv2D) (None, 1, 1, 8) 1544 ['block2b_se_reshape[0][0]']
block2b_se_expand (Conv2D) (None, 1, 1, 192) 1728 ['block2b_se_reduce[0][0]']
block2b_se_excite (Multiply) (None, 95, 95, 192) 0 ['block2b_activation[0][0]',
'block2b_se_expand[0][0]']
block2b_project_conv (Conv2D) (None, 95, 95, 32) 6144 ['block2b_se_excite[0][0]']
block2b_project_bn (BatchNorma (None, 95, 95, 32) 128 ['block2b_project_conv[0][0]']
lization)
block2b_drop (FixedDropout) (None, 95, 95, 32) 0 ['block2b_project_bn[0][0]']
block2b_add (Add) (None, 95, 95, 32) 0 ['block2b_drop[0][0]',
'block2a_project_bn[0][0]']
block2c_expand_conv (Conv2D) (None, 95, 95, 192) 6144 ['block2b_add[0][0]']
block2c_expand_bn (BatchNormal (None, 95, 95, 192) 768 ['block2c_expand_conv[0][0]']
ization)
block2c_expand_activation (Act (None, 95, 95, 192) 0 ['block2c_expand_bn[0][0]']
ivation)
block2c_dwconv (DepthwiseConv2 (None, 95, 95, 192) 1728 ['block2c_expand_activation[0][0]
D) ']
block2c_bn (BatchNormalization (None, 95, 95, 192) 768 ['block2c_dwconv[0][0]']
)
block2c_activation (Activation (None, 95, 95, 192) 0 ['block2c_bn[0][0]']
)
block2c_se_squeeze (GlobalAver (None, 192) 0 ['block2c_activation[0][0]']
agePooling2D)
block2c_se_reshape (Reshape) (None, 1, 1, 192) 0 ['block2c_se_squeeze[0][0]']
block2c_se_reduce (Conv2D) (None, 1, 1, 8) 1544 ['block2c_se_reshape[0][0]']
block2c_se_expand (Conv2D) (None, 1, 1, 192) 1728 ['block2c_se_reduce[0][0]']
block2c_se_excite (Multiply) (None, 95, 95, 192) 0 ['block2c_activation[0][0]',
'block2c_se_expand[0][0]']
block2c_project_conv (Conv2D) (None, 95, 95, 32) 6144 ['block2c_se_excite[0][0]']
block2c_project_bn (BatchNorma (None, 95, 95, 32) 128 ['block2c_project_conv[0][0]']
lization)
block2c_drop (FixedDropout) (None, 95, 95, 32) 0 ['block2c_project_bn[0][0]']
block2c_add (Add) (None, 95, 95, 32) 0 ['block2c_drop[0][0]',
'block2b_add[0][0]']
block2d_expand_conv (Conv2D) (None, 95, 95, 192) 6144 ['block2c_add[0][0]']
block2d_expand_bn (BatchNormal (None, 95, 95, 192) 768 ['block2d_expand_conv[0][0]']
ization)
block2d_expand_activation (Act (None, 95, 95, 192) 0 ['block2d_expand_bn[0][0]']
ivation)
block2d_dwconv (DepthwiseConv2 (None, 95, 95, 192) 1728 ['block2d_expand_activation[0][0]
D) ']
block2d_bn (BatchNormalization (None, 95, 95, 192) 768 ['block2d_dwconv[0][0]']
)
block2d_activation (Activation (None, 95, 95, 192) 0 ['block2d_bn[0][0]']
)
block2d_se_squeeze (GlobalAver (None, 192) 0 ['block2d_activation[0][0]']
agePooling2D)
block2d_se_reshape (Reshape) (None, 1, 1, 192) 0 ['block2d_se_squeeze[0][0]']
block2d_se_reduce (Conv2D) (None, 1, 1, 8) 1544 ['block2d_se_reshape[0][0]']
block2d_se_expand (Conv2D) (None, 1, 1, 192) 1728 ['block2d_se_reduce[0][0]']
block2d_se_excite (Multiply) (None, 95, 95, 192) 0 ['block2d_activation[0][0]',
'block2d_se_expand[0][0]']
block2d_project_conv (Conv2D) (None, 95, 95, 32) 6144 ['block2d_se_excite[0][0]']
block2d_project_bn (BatchNorma (None, 95, 95, 32) 128 ['block2d_project_conv[0][0]']
lization)
block2d_drop (FixedDropout) (None, 95, 95, 32) 0 ['block2d_project_bn[0][0]']
block2d_add (Add) (None, 95, 95, 32) 0 ['block2d_drop[0][0]',
'block2c_add[0][0]']
block3a_expand_conv (Conv2D) (None, 95, 95, 192) 6144 ['block2d_add[0][0]']
block3a_expand_bn (BatchNormal (None, 95, 95, 192) 768 ['block3a_expand_conv[0][0]']
ization)
block3a_expand_activation (Act (None, 95, 95, 192) 0 ['block3a_expand_bn[0][0]']
ivation)
block3a_dwconv (DepthwiseConv2 (None, 48, 48, 192) 4800 ['block3a_expand_activation[0][0]
D) ']
block3a_bn (BatchNormalization (None, 48, 48, 192) 768 ['block3a_dwconv[0][0]']
)
block3a_activation (Activation (None, 48, 48, 192) 0 ['block3a_bn[0][0]']
)
block3a_se_squeeze (GlobalAver (None, 192) 0 ['block3a_activation[0][0]']
agePooling2D)
block3a_se_reshape (Reshape) (None, 1, 1, 192) 0 ['block3a_se_squeeze[0][0]']
block3a_se_reduce (Conv2D) (None, 1, 1, 8) 1544 ['block3a_se_reshape[0][0]']
block3a_se_expand (Conv2D) (None, 1, 1, 192) 1728 ['block3a_se_reduce[0][0]']
block3a_se_excite (Multiply) (None, 48, 48, 192) 0 ['block3a_activation[0][0]',
'block3a_se_expand[0][0]']
block3a_project_conv (Conv2D) (None, 48, 48, 56) 10752 ['block3a_se_excite[0][0]']
block3a_project_bn (BatchNorma (None, 48, 48, 56) 224 ['block3a_project_conv[0][0]']
lization)
block3b_expand_conv (Conv2D) (None, 48, 48, 336) 18816 ['block3a_project_bn[0][0]']
block3b_expand_bn (BatchNormal (None, 48, 48, 336) 1344 ['block3b_expand_conv[0][0]']
ization)
block3b_expand_activation (Act (None, 48, 48, 336) 0 ['block3b_expand_bn[0][0]']
ivation)
block3b_dwconv (DepthwiseConv2 (None, 48, 48, 336) 8400 ['block3b_expand_activation[0][0]
D) ']
block3b_bn (BatchNormalization (None, 48, 48, 336) 1344 ['block3b_dwconv[0][0]']
)
block3b_activation (Activation (None, 48, 48, 336) 0 ['block3b_bn[0][0]']
)
block3b_se_squeeze (GlobalAver (None, 336) 0 ['block3b_activation[0][0]']
agePooling2D)
block3b_se_reshape (Reshape) (None, 1, 1, 336) 0 ['block3b_se_squeeze[0][0]']
block3b_se_reduce (Conv2D) (None, 1, 1, 14) 4718 ['block3b_se_reshape[0][0]']
block3b_se_expand (Conv2D) (None, 1, 1, 336) 5040 ['block3b_se_reduce[0][0]']
block3b_se_excite (Multiply) (None, 48, 48, 336) 0 ['block3b_activation[0][0]',
'block3b_se_expand[0][0]']
block3b_project_conv (Conv2D) (None, 48, 48, 56) 18816 ['block3b_se_excite[0][0]']
block3b_project_bn (BatchNorma (None, 48, 48, 56) 224 ['block3b_project_conv[0][0]']
lization)
block3b_drop (FixedDropout) (None, 48, 48, 56) 0 ['block3b_project_bn[0][0]']
block3b_add (Add) (None, 48, 48, 56) 0 ['block3b_drop[0][0]',
'block3a_project_bn[0][0]']
block3c_expand_conv (Conv2D) (None, 48, 48, 336) 18816 ['block3b_add[0][0]']
block3c_expand_bn (BatchNormal (None, 48, 48, 336) 1344 ['block3c_expand_conv[0][0]']
ization)
block3c_expand_activation (Act (None, 48, 48, 336) 0 ['block3c_expand_bn[0][0]']
ivation)
block3c_dwconv (DepthwiseConv2 (None, 48, 48, 336) 8400 ['block3c_expand_activation[0][0]
D) ']
block3c_bn (BatchNormalization (None, 48, 48, 336) 1344 ['block3c_dwconv[0][0]']
)
block3c_activation (Activation (None, 48, 48, 336) 0 ['block3c_bn[0][0]']
)
block3c_se_squeeze (GlobalAver (None, 336) 0 ['block3c_activation[0][0]']
agePooling2D)
block3c_se_reshape (Reshape) (None, 1, 1, 336) 0 ['block3c_se_squeeze[0][0]']
block3c_se_reduce (Conv2D) (None, 1, 1, 14) 4718 ['block3c_se_reshape[0][0]']
block3c_se_expand (Conv2D) (None, 1, 1, 336) 5040 ['block3c_se_reduce[0][0]']
block3c_se_excite (Multiply) (None, 48, 48, 336) 0 ['block3c_activation[0][0]',
'block3c_se_expand[0][0]']
block3c_project_conv (Conv2D) (None, 48, 48, 56) 18816 ['block3c_se_excite[0][0]']
block3c_project_bn (BatchNorma (None, 48, 48, 56) 224 ['block3c_project_conv[0][0]']
lization)
block3c_drop (FixedDropout) (None, 48, 48, 56) 0 ['block3c_project_bn[0][0]']
block3c_add (Add) (None, 48, 48, 56) 0 ['block3c_drop[0][0]',
'block3b_add[0][0]']
block3d_expand_conv (Conv2D) (None, 48, 48, 336) 18816 ['block3c_add[0][0]']
block3d_expand_bn (BatchNormal (None, 48, 48, 336) 1344 ['block3d_expand_conv[0][0]']
ization)
block3d_expand_activation (Act (None, 48, 48, 336) 0 ['block3d_expand_bn[0][0]']
ivation)
block3d_dwconv (DepthwiseConv2 (None, 48, 48, 336) 8400 ['block3d_expand_activation[0][0]
D) ']
block3d_bn (BatchNormalization (None, 48, 48, 336) 1344 ['block3d_dwconv[0][0]']
)
block3d_activation (Activation (None, 48, 48, 336) 0 ['block3d_bn[0][0]']
)
block3d_se_squeeze (GlobalAver (None, 336) 0 ['block3d_activation[0][0]']
agePooling2D)
block3d_se_reshape (Reshape) (None, 1, 1, 336) 0 ['block3d_se_squeeze[0][0]']
block3d_se_reduce (Conv2D) (None, 1, 1, 14) 4718 ['block3d_se_reshape[0][0]']
block3d_se_expand (Conv2D) (None, 1, 1, 336) 5040 ['block3d_se_reduce[0][0]']
block3d_se_excite (Multiply) (None, 48, 48, 336) 0 ['block3d_activation[0][0]',
'block3d_se_expand[0][0]']
block3d_project_conv (Conv2D) (None, 48, 48, 56) 18816 ['block3d_se_excite[0][0]']
block3d_project_bn (BatchNorma (None, 48, 48, 56) 224 ['block3d_project_conv[0][0]']
lization)
block3d_drop (FixedDropout) (None, 48, 48, 56) 0 ['block3d_project_bn[0][0]']
block3d_add (Add) (None, 48, 48, 56) 0 ['block3d_drop[0][0]',
'block3c_add[0][0]']
block4a_expand_conv (Conv2D) (None, 48, 48, 336) 18816 ['block3d_add[0][0]']
block4a_expand_bn (BatchNormal (None, 48, 48, 336) 1344 ['block4a_expand_conv[0][0]']
ization)
block4a_expand_activation (Act (None, 48, 48, 336) 0 ['block4a_expand_bn[0][0]']
ivation)
block4a_dwconv (DepthwiseConv2 (None, 24, 24, 336) 3024 ['block4a_expand_activation[0][0]
D) ']
block4a_bn (BatchNormalization (None, 24, 24, 336) 1344 ['block4a_dwconv[0][0]']
)
block4a_activation (Activation (None, 24, 24, 336) 0 ['block4a_bn[0][0]']
)
block4a_se_squeeze (GlobalAver (None, 336) 0 ['block4a_activation[0][0]']
agePooling2D)
block4a_se_reshape (Reshape) (None, 1, 1, 336) 0 ['block4a_se_squeeze[0][0]']
block4a_se_reduce (Conv2D) (None, 1, 1, 14) 4718 ['block4a_se_reshape[0][0]']
block4a_se_expand (Conv2D) (None, 1, 1, 336) 5040 ['block4a_se_reduce[0][0]']
block4a_se_excite (Multiply) (None, 24, 24, 336) 0 ['block4a_activation[0][0]',
'block4a_se_expand[0][0]']
block4a_project_conv (Conv2D) (None, 24, 24, 112) 37632 ['block4a_se_excite[0][0]']
block4a_project_bn (BatchNorma (None, 24, 24, 112) 448 ['block4a_project_conv[0][0]']
lization)
block4b_expand_conv (Conv2D) (None, 24, 24, 672) 75264 ['block4a_project_bn[0][0]']
block4b_expand_bn (BatchNormal (None, 24, 24, 672) 2688 ['block4b_expand_conv[0][0]']
ization)
block4b_expand_activation (Act (None, 24, 24, 672) 0 ['block4b_expand_bn[0][0]']
ivation)
block4b_dwconv (DepthwiseConv2 (None, 24, 24, 672) 6048 ['block4b_expand_activation[0][0]
D) ']
block4b_bn (BatchNormalization (None, 24, 24, 672) 2688 ['block4b_dwconv[0][0]']
)
block4b_activation (Activation (None, 24, 24, 672) 0 ['block4b_bn[0][0]']
)
block4b_se_squeeze (GlobalAver (None, 672) 0 ['block4b_activation[0][0]']
agePooling2D)
block4b_se_reshape (Reshape) (None, 1, 1, 672) 0 ['block4b_se_squeeze[0][0]']
block4b_se_reduce (Conv2D) (None, 1, 1, 28) 18844 ['block4b_se_reshape[0][0]']
block4b_se_expand (Conv2D) (None, 1, 1, 672) 19488 ['block4b_se_reduce[0][0]']
block4b_se_excite (Multiply) (None, 24, 24, 672) 0 ['block4b_activation[0][0]',
'block4b_se_expand[0][0]']
block4b_project_conv (Conv2D) (None, 24, 24, 112) 75264 ['block4b_se_excite[0][0]']
block4b_project_bn (BatchNorma (None, 24, 24, 112) 448 ['block4b_project_conv[0][0]']
lization)
block4b_drop (FixedDropout) (None, 24, 24, 112) 0 ['block4b_project_bn[0][0]']
block4b_add (Add) (None, 24, 24, 112) 0 ['block4b_drop[0][0]',
'block4a_project_bn[0][0]']
block4c_expand_conv (Conv2D) (None, 24, 24, 672) 75264 ['block4b_add[0][0]']
block4c_expand_bn (BatchNormal (None, 24, 24, 672) 2688 ['block4c_expand_conv[0][0]']
ization)
block4c_expand_activation (Act (None, 24, 24, 672) 0 ['block4c_expand_bn[0][0]']
ivation)
block4c_dwconv (DepthwiseConv2 (None, 24, 24, 672) 6048 ['block4c_expand_activation[0][0]
D) ']
block4c_bn (BatchNormalization (None, 24, 24, 672) 2688 ['block4c_dwconv[0][0]']
)
block4c_activation (Activation (None, 24, 24, 672) 0 ['block4c_bn[0][0]']
)
block4c_se_squeeze (GlobalAver (None, 672) 0 ['block4c_activation[0][0]']
agePooling2D)
block4c_se_reshape (Reshape) (None, 1, 1, 672) 0 ['block4c_se_squeeze[0][0]']
block4c_se_reduce (Conv2D) (None, 1, 1, 28) 18844 ['block4c_se_reshape[0][0]']
block4c_se_expand (Conv2D) (None, 1, 1, 672) 19488 ['block4c_se_reduce[0][0]']
block4c_se_excite (Multiply) (None, 24, 24, 672) 0 ['block4c_activation[0][0]',
'block4c_se_expand[0][0]']
block4c_project_conv (Conv2D) (None, 24, 24, 112) 75264 ['block4c_se_excite[0][0]']
block4c_project_bn (BatchNorma (None, 24, 24, 112) 448 ['block4c_project_conv[0][0]']
lization)
block4c_drop (FixedDropout) (None, 24, 24, 112) 0 ['block4c_project_bn[0][0]']
block4c_add (Add) (None, 24, 24, 112) 0 ['block4c_drop[0][0]',
'block4b_add[0][0]']
block4d_expand_conv (Conv2D) (None, 24, 24, 672) 75264 ['block4c_add[0][0]']
block4d_expand_bn (BatchNormal (None, 24, 24, 672) 2688 ['block4d_expand_conv[0][0]']
ization)
block4d_expand_activation (Act (None, 24, 24, 672) 0 ['block4d_expand_bn[0][0]']
ivation)
block4d_dwconv (DepthwiseConv2 (None, 24, 24, 672) 6048 ['block4d_expand_activation[0][0]
D) ']
block4d_bn (BatchNormalization (None, 24, 24, 672) 2688 ['block4d_dwconv[0][0]']
)
block4d_activation (Activation (None, 24, 24, 672) 0 ['block4d_bn[0][0]']
)
block4d_se_squeeze (GlobalAver (None, 672) 0 ['block4d_activation[0][0]']
agePooling2D)
block4d_se_reshape (Reshape) (None, 1, 1, 672) 0 ['block4d_se_squeeze[0][0]']
block4d_se_reduce (Conv2D) (None, 1, 1, 28) 18844 ['block4d_se_reshape[0][0]']
block4d_se_expand (Conv2D) (None, 1, 1, 672) 19488 ['block4d_se_reduce[0][0]']
block4d_se_excite (Multiply) (None, 24, 24, 672) 0 ['block4d_activation[0][0]',
'block4d_se_expand[0][0]']
block4d_project_conv (Conv2D) (None, 24, 24, 112) 75264 ['block4d_se_excite[0][0]']
block4d_project_bn (BatchNorma (None, 24, 24, 112) 448 ['block4d_project_conv[0][0]']
lization)
block4d_drop (FixedDropout) (None, 24, 24, 112) 0 ['block4d_project_bn[0][0]']
block4d_add (Add) (None, 24, 24, 112) 0 ['block4d_drop[0][0]',
'block4c_add[0][0]']
block4e_expand_conv (Conv2D) (None, 24, 24, 672) 75264 ['block4d_add[0][0]']
block4e_expand_bn (BatchNormal (None, 24, 24, 672) 2688 ['block4e_expand_conv[0][0]']
ization)
block4e_expand_activation (Act (None, 24, 24, 672) 0 ['block4e_expand_bn[0][0]']
ivation)
block4e_dwconv (DepthwiseConv2 (None, 24, 24, 672) 6048 ['block4e_expand_activation[0][0]
D) ']
block4e_bn (BatchNormalization (None, 24, 24, 672) 2688 ['block4e_dwconv[0][0]']
)
block4e_activation (Activation (None, 24, 24, 672) 0 ['block4e_bn[0][0]']
)
block4e_se_squeeze (GlobalAver (None, 672) 0 ['block4e_activation[0][0]']
agePooling2D)
block4e_se_reshape (Reshape) (None, 1, 1, 672) 0 ['block4e_se_squeeze[0][0]']
block4e_se_reduce (Conv2D) (None, 1, 1, 28) 18844 ['block4e_se_reshape[0][0]']
block4e_se_expand (Conv2D) (None, 1, 1, 672) 19488 ['block4e_se_reduce[0][0]']
block4e_se_excite (Multiply) (None, 24, 24, 672) 0 ['block4e_activation[0][0]',
'block4e_se_expand[0][0]']
block4e_project_conv (Conv2D) (None, 24, 24, 112) 75264 ['block4e_se_excite[0][0]']
block4e_project_bn (BatchNorma (None, 24, 24, 112) 448 ['block4e_project_conv[0][0]']
lization)
block4e_drop (FixedDropout) (None, 24, 24, 112) 0 ['block4e_project_bn[0][0]']
block4e_add (Add) (None, 24, 24, 112) 0 ['block4e_drop[0][0]',
'block4d_add[0][0]']
block4f_expand_conv (Conv2D) (None, 24, 24, 672) 75264 ['block4e_add[0][0]']
block4f_expand_bn (BatchNormal (None, 24, 24, 672) 2688 ['block4f_expand_conv[0][0]']
ization)
block4f_expand_activation (Act (None, 24, 24, 672) 0 ['block4f_expand_bn[0][0]']
ivation)
block4f_dwconv (DepthwiseConv2 (None, 24, 24, 672) 6048 ['block4f_expand_activation[0][0]
D) ']
block4f_bn (BatchNormalization (None, 24, 24, 672) 2688 ['block4f_dwconv[0][0]']
)
block4f_activation (Activation (None, 24, 24, 672) 0 ['block4f_bn[0][0]']
)
block4f_se_squeeze (GlobalAver (None, 672) 0 ['block4f_activation[0][0]']
agePooling2D)
block4f_se_reshape (Reshape) (None, 1, 1, 672) 0 ['block4f_se_squeeze[0][0]']
block4f_se_reduce (Conv2D) (None, 1, 1, 28) 18844 ['block4f_se_reshape[0][0]']
block4f_se_expand (Conv2D) (None, 1, 1, 672) 19488 ['block4f_se_reduce[0][0]']
block4f_se_excite (Multiply) (None, 24, 24, 672) 0 ['block4f_activation[0][0]',
'block4f_se_expand[0][0]']
block4f_project_conv (Conv2D) (None, 24, 24, 112) 75264 ['block4f_se_excite[0][0]']
block4f_project_bn (BatchNorma (None, 24, 24, 112) 448 ['block4f_project_conv[0][0]']
lization)
block4f_drop (FixedDropout) (None, 24, 24, 112) 0 ['block4f_project_bn[0][0]']
block4f_add (Add) (None, 24, 24, 112) 0 ['block4f_drop[0][0]',
'block4e_add[0][0]']
block5a_expand_conv (Conv2D) (None, 24, 24, 672) 75264 ['block4f_add[0][0]']
block5a_expand_bn (BatchNormal (None, 24, 24, 672) 2688 ['block5a_expand_conv[0][0]']
ization)
block5a_expand_activation (Act (None, 24, 24, 672) 0 ['block5a_expand_bn[0][0]']
ivation)
block5a_dwconv (DepthwiseConv2 (None, 24, 24, 672) 16800 ['block5a_expand_activation[0][0]
D) ']
block5a_bn (BatchNormalization (None, 24, 24, 672) 2688 ['block5a_dwconv[0][0]']
)
block5a_activation (Activation (None, 24, 24, 672) 0 ['block5a_bn[0][0]']
)
block5a_se_squeeze (GlobalAver (None, 672) 0 ['block5a_activation[0][0]']
agePooling2D)
block5a_se_reshape (Reshape) (None, 1, 1, 672) 0 ['block5a_se_squeeze[0][0]']
block5a_se_reduce (Conv2D) (None, 1, 1, 28) 18844 ['block5a_se_reshape[0][0]']
block5a_se_expand (Conv2D) (None, 1, 1, 672) 19488 ['block5a_se_reduce[0][0]']
block5a_se_excite (Multiply) (None, 24, 24, 672) 0 ['block5a_activation[0][0]',
'block5a_se_expand[0][0]']
block5a_project_conv (Conv2D) (None, 24, 24, 160) 107520 ['block5a_se_excite[0][0]']
block5a_project_bn (BatchNorma (None, 24, 24, 160) 640 ['block5a_project_conv[0][0]']
lization)
block5b_expand_conv (Conv2D) (None, 24, 24, 960) 153600 ['block5a_project_bn[0][0]']
block5b_expand_bn (BatchNormal (None, 24, 24, 960) 3840 ['block5b_expand_conv[0][0]']
ization)
block5b_expand_activation (Act (None, 24, 24, 960) 0 ['block5b_expand_bn[0][0]']
ivation)
block5b_dwconv (DepthwiseConv2 (None, 24, 24, 960) 24000 ['block5b_expand_activation[0][0]
D) ']
block5b_bn (BatchNormalization (None, 24, 24, 960) 3840 ['block5b_dwconv[0][0]']
)
block5b_activation (Activation (None, 24, 24, 960) 0 ['block5b_bn[0][0]']
)
block5b_se_squeeze (GlobalAver (None, 960) 0 ['block5b_activation[0][0]']
agePooling2D)
block5b_se_reshape (Reshape) (None, 1, 1, 960) 0 ['block5b_se_squeeze[0][0]']
block5b_se_reduce (Conv2D) (None, 1, 1, 40) 38440 ['block5b_se_reshape[0][0]']
block5b_se_expand (Conv2D) (None, 1, 1, 960) 39360 ['block5b_se_reduce[0][0]']
block5b_se_excite (Multiply) (None, 24, 24, 960) 0 ['block5b_activation[0][0]',
'block5b_se_expand[0][0]']
block5b_project_conv (Conv2D) (None, 24, 24, 160) 153600 ['block5b_se_excite[0][0]']
block5b_project_bn (BatchNorma (None, 24, 24, 160) 640 ['block5b_project_conv[0][0]']
lization)
block5b_drop (FixedDropout) (None, 24, 24, 160) 0 ['block5b_project_bn[0][0]']
block5b_add (Add) (None, 24, 24, 160) 0 ['block5b_drop[0][0]',
'block5a_project_bn[0][0]']
block5c_expand_conv (Conv2D) (None, 24, 24, 960) 153600 ['block5b_add[0][0]']
block5c_expand_bn (BatchNormal (None, 24, 24, 960) 3840 ['block5c_expand_conv[0][0]']
ization)
block5c_expand_activation (Act (None, 24, 24, 960) 0 ['block5c_expand_bn[0][0]']
ivation)
block5c_dwconv (DepthwiseConv2 (None, 24, 24, 960) 24000 ['block5c_expand_activation[0][0]
D) ']
block5c_bn (BatchNormalization (None, 24, 24, 960) 3840 ['block5c_dwconv[0][0]']
)
block5c_activation (Activation (None, 24, 24, 960) 0 ['block5c_bn[0][0]']
)
block5c_se_squeeze (GlobalAver (None, 960) 0 ['block5c_activation[0][0]']
agePooling2D)
block5c_se_reshape (Reshape) (None, 1, 1, 960) 0 ['block5c_se_squeeze[0][0]']
block5c_se_reduce (Conv2D) (None, 1, 1, 40) 38440 ['block5c_se_reshape[0][0]']
block5c_se_expand (Conv2D) (None, 1, 1, 960) 39360 ['block5c_se_reduce[0][0]']
block5c_se_excite (Multiply) (None, 24, 24, 960) 0 ['block5c_activation[0][0]',
'block5c_se_expand[0][0]']
block5c_project_conv (Conv2D) (None, 24, 24, 160) 153600 ['block5c_se_excite[0][0]']
block5c_project_bn (BatchNorma (None, 24, 24, 160) 640 ['block5c_project_conv[0][0]']
lization)
block5c_drop (FixedDropout) (None, 24, 24, 160) 0 ['block5c_project_bn[0][0]']
block5c_add (Add) (None, 24, 24, 160) 0 ['block5c_drop[0][0]',
'block5b_add[0][0]']
block5d_expand_conv (Conv2D) (None, 24, 24, 960) 153600 ['block5c_add[0][0]']
block5d_expand_bn (BatchNormal (None, 24, 24, 960) 3840 ['block5d_expand_conv[0][0]']
ization)
block5d_expand_activation (Act (None, 24, 24, 960) 0 ['block5d_expand_bn[0][0]']
ivation)
block5d_dwconv (DepthwiseConv2 (None, 24, 24, 960) 24000 ['block5d_expand_activation[0][0]
D) ']
block5d_bn (BatchNormalization (None, 24, 24, 960) 3840 ['block5d_dwconv[0][0]']
)
block5d_activation (Activation (None, 24, 24, 960) 0 ['block5d_bn[0][0]']
)
block5d_se_squeeze (GlobalAver (None, 960) 0 ['block5d_activation[0][0]']
agePooling2D)
block5d_se_reshape (Reshape) (None, 1, 1, 960) 0 ['block5d_se_squeeze[0][0]']
block5d_se_reduce (Conv2D) (None, 1, 1, 40) 38440 ['block5d_se_reshape[0][0]']
block5d_se_expand (Conv2D) (None, 1, 1, 960) 39360 ['block5d_se_reduce[0][0]']
block5d_se_excite (Multiply) (None, 24, 24, 960) 0 ['block5d_activation[0][0]',
'block5d_se_expand[0][0]']
block5d_project_conv (Conv2D) (None, 24, 24, 160) 153600 ['block5d_se_excite[0][0]']
block5d_project_bn (BatchNorma (None, 24, 24, 160) 640 ['block5d_project_conv[0][0]']
lization)
block5d_drop (FixedDropout) (None, 24, 24, 160) 0 ['block5d_project_bn[0][0]']
block5d_add (Add) (None, 24, 24, 160) 0 ['block5d_drop[0][0]',
'block5c_add[0][0]']
block5e_expand_conv (Conv2D) (None, 24, 24, 960) 153600 ['block5d_add[0][0]']
block5e_expand_bn (BatchNormal (None, 24, 24, 960) 3840 ['block5e_expand_conv[0][0]']
ization)
block5e_expand_activation (Act (None, 24, 24, 960) 0 ['block5e_expand_bn[0][0]']
ivation)
block5e_dwconv (DepthwiseConv2 (None, 24, 24, 960) 24000 ['block5e_expand_activation[0][0]
D) ']
block5e_bn (BatchNormalization (None, 24, 24, 960) 3840 ['block5e_dwconv[0][0]']
)
block5e_activation (Activation (None, 24, 24, 960) 0 ['block5e_bn[0][0]']
)
block5e_se_squeeze (GlobalAver (None, 960) 0 ['block5e_activation[0][0]']
agePooling2D)
block5e_se_reshape (Reshape) (None, 1, 1, 960) 0 ['block5e_se_squeeze[0][0]']
block5e_se_reduce (Conv2D) (None, 1, 1, 40) 38440 ['block5e_se_reshape[0][0]']
block5e_se_expand (Conv2D) (None, 1, 1, 960) 39360 ['block5e_se_reduce[0][0]']
block5e_se_excite (Multiply) (None, 24, 24, 960) 0 ['block5e_activation[0][0]',
'block5e_se_expand[0][0]']
block5e_project_conv (Conv2D) (None, 24, 24, 160) 153600 ['block5e_se_excite[0][0]']
block5e_project_bn (BatchNorma (None, 24, 24, 160) 640 ['block5e_project_conv[0][0]']
lization)
block5e_drop (FixedDropout) (None, 24, 24, 160) 0 ['block5e_project_bn[0][0]']
block5e_add (Add) (None, 24, 24, 160) 0 ['block5e_drop[0][0]',
'block5d_add[0][0]']
block5f_expand_conv (Conv2D) (None, 24, 24, 960) 153600 ['block5e_add[0][0]']
block5f_expand_bn (BatchNormal (None, 24, 24, 960) 3840 ['block5f_expand_conv[0][0]']
ization)
block5f_expand_activation (Act (None, 24, 24, 960) 0 ['block5f_expand_bn[0][0]']
ivation)
block5f_dwconv (DepthwiseConv2 (None, 24, 24, 960) 24000 ['block5f_expand_activation[0][0]
D) ']
block5f_bn (BatchNormalization (None, 24, 24, 960) 3840 ['block5f_dwconv[0][0]']
)
block5f_activation (Activation (None, 24, 24, 960) 0 ['block5f_bn[0][0]']
)
block5f_se_squeeze (GlobalAver (None, 960) 0 ['block5f_activation[0][0]']
agePooling2D)
block5f_se_reshape (Reshape) (None, 1, 1, 960) 0 ['block5f_se_squeeze[0][0]']
block5f_se_reduce (Conv2D) (None, 1, 1, 40) 38440 ['block5f_se_reshape[0][0]']
block5f_se_expand (Conv2D) (None, 1, 1, 960) 39360 ['block5f_se_reduce[0][0]']
block5f_se_excite (Multiply) (None, 24, 24, 960) 0 ['block5f_activation[0][0]',
'block5f_se_expand[0][0]']
block5f_project_conv (Conv2D) (None, 24, 24, 160) 153600 ['block5f_se_excite[0][0]']
block5f_project_bn (BatchNorma (None, 24, 24, 160) 640 ['block5f_project_conv[0][0]']
lization)
block5f_drop (FixedDropout) (None, 24, 24, 160) 0 ['block5f_project_bn[0][0]']
block5f_add (Add) (None, 24, 24, 160) 0 ['block5f_drop[0][0]',
'block5e_add[0][0]']
block6a_expand_conv (Conv2D) (None, 24, 24, 960) 153600 ['block5f_add[0][0]']
block6a_expand_bn (BatchNormal (None, 24, 24, 960) 3840 ['block6a_expand_conv[0][0]']
ization)
block6a_expand_activation (Act (None, 24, 24, 960) 0 ['block6a_expand_bn[0][0]']
ivation)
block6a_dwconv (DepthwiseConv2 (None, 12, 12, 960) 24000 ['block6a_expand_activation[0][0]
D) ']
block6a_bn (BatchNormalization (None, 12, 12, 960) 3840 ['block6a_dwconv[0][0]']
)
block6a_activation (Activation (None, 12, 12, 960) 0 ['block6a_bn[0][0]']
)
block6a_se_squeeze (GlobalAver (None, 960) 0 ['block6a_activation[0][0]']
agePooling2D)
block6a_se_reshape (Reshape) (None, 1, 1, 960) 0 ['block6a_se_squeeze[0][0]']
block6a_se_reduce (Conv2D) (None, 1, 1, 40) 38440 ['block6a_se_reshape[0][0]']
block6a_se_expand (Conv2D) (None, 1, 1, 960) 39360 ['block6a_se_reduce[0][0]']
block6a_se_excite (Multiply) (None, 12, 12, 960) 0 ['block6a_activation[0][0]',
'block6a_se_expand[0][0]']
block6a_project_conv (Conv2D) (None, 12, 12, 272) 261120 ['block6a_se_excite[0][0]']
block6a_project_bn (BatchNorma (None, 12, 12, 272) 1088 ['block6a_project_conv[0][0]']
lization)
block6b_expand_conv (Conv2D) (None, 12, 12, 1632 443904 ['block6a_project_bn[0][0]']
)
block6b_expand_bn (BatchNormal (None, 12, 12, 1632 6528 ['block6b_expand_conv[0][0]']
ization) )
block6b_expand_activation (Act (None, 12, 12, 1632 0 ['block6b_expand_bn[0][0]']
ivation) )
block6b_dwconv (DepthwiseConv2 (None, 12, 12, 1632 40800 ['block6b_expand_activation[0][0]
D) ) ']
block6b_bn (BatchNormalization (None, 12, 12, 1632 6528 ['block6b_dwconv[0][0]']
) )
block6b_activation (Activation (None, 12, 12, 1632 0 ['block6b_bn[0][0]']
) )
block6b_se_squeeze (GlobalAver (None, 1632) 0 ['block6b_activation[0][0]']
agePooling2D)
block6b_se_reshape (Reshape) (None, 1, 1, 1632) 0 ['block6b_se_squeeze[0][0]']
block6b_se_reduce (Conv2D) (None, 1, 1, 68) 111044 ['block6b_se_reshape[0][0]']
block6b_se_expand (Conv2D) (None, 1, 1, 1632) 112608 ['block6b_se_reduce[0][0]']
block6b_se_excite (Multiply) (None, 12, 12, 1632 0 ['block6b_activation[0][0]',
) 'block6b_se_expand[0][0]']
block6b_project_conv (Conv2D) (None, 12, 12, 272) 443904 ['block6b_se_excite[0][0]']
block6b_project_bn (BatchNorma (None, 12, 12, 272) 1088 ['block6b_project_conv[0][0]']
lization)
block6b_drop (FixedDropout) (None, 12, 12, 272) 0 ['block6b_project_bn[0][0]']
block6b_add (Add) (None, 12, 12, 272) 0 ['block6b_drop[0][0]',
'block6a_project_bn[0][0]']
block6c_expand_conv (Conv2D) (None, 12, 12, 1632 443904 ['block6b_add[0][0]']
)
block6c_expand_bn (BatchNormal (None, 12, 12, 1632 6528 ['block6c_expand_conv[0][0]']
ization) )
block6c_expand_activation (Act (None, 12, 12, 1632 0 ['block6c_expand_bn[0][0]']
ivation) )
block6c_dwconv (DepthwiseConv2 (None, 12, 12, 1632 40800 ['block6c_expand_activation[0][0]
D) ) ']
block6c_bn (BatchNormalization (None, 12, 12, 1632 6528 ['block6c_dwconv[0][0]']
) )
block6c_activation (Activation (None, 12, 12, 1632 0 ['block6c_bn[0][0]']
) )
block6c_se_squeeze (GlobalAver (None, 1632) 0 ['block6c_activation[0][0]']
agePooling2D)
block6c_se_reshape (Reshape) (None, 1, 1, 1632) 0 ['block6c_se_squeeze[0][0]']
block6c_se_reduce (Conv2D) (None, 1, 1, 68) 111044 ['block6c_se_reshape[0][0]']
block6c_se_expand (Conv2D) (None, 1, 1, 1632) 112608 ['block6c_se_reduce[0][0]']
block6c_se_excite (Multiply) (None, 12, 12, 1632 0 ['block6c_activation[0][0]',
) 'block6c_se_expand[0][0]']
block6c_project_conv (Conv2D) (None, 12, 12, 272) 443904 ['block6c_se_excite[0][0]']
block6c_project_bn (BatchNorma (None, 12, 12, 272) 1088 ['block6c_project_conv[0][0]']
lization)
block6c_drop (FixedDropout) (None, 12, 12, 272) 0 ['block6c_project_bn[0][0]']
block6c_add (Add) (None, 12, 12, 272) 0 ['block6c_drop[0][0]',
'block6b_add[0][0]']
block6d_expand_conv (Conv2D) (None, 12, 12, 1632 443904 ['block6c_add[0][0]']
)
block6d_expand_bn (BatchNormal (None, 12, 12, 1632 6528 ['block6d_expand_conv[0][0]']
ization) )
block6d_expand_activation (Act (None, 12, 12, 1632 0 ['block6d_expand_bn[0][0]']
ivation) )
block6d_dwconv (DepthwiseConv2 (None, 12, 12, 1632 40800 ['block6d_expand_activation[0][0]
D) ) ']
block6d_bn (BatchNormalization (None, 12, 12, 1632 6528 ['block6d_dwconv[0][0]']
) )
block6d_activation (Activation (None, 12, 12, 1632 0 ['block6d_bn[0][0]']
) )
block6d_se_squeeze (GlobalAver (None, 1632) 0 ['block6d_activation[0][0]']
agePooling2D)
block6d_se_reshape (Reshape) (None, 1, 1, 1632) 0 ['block6d_se_squeeze[0][0]']
block6d_se_reduce (Conv2D) (None, 1, 1, 68) 111044 ['block6d_se_reshape[0][0]']
block6d_se_expand (Conv2D) (None, 1, 1, 1632) 112608 ['block6d_se_reduce[0][0]']
block6d_se_excite (Multiply) (None, 12, 12, 1632 0 ['block6d_activation[0][0]',
) 'block6d_se_expand[0][0]']
block6d_project_conv (Conv2D) (None, 12, 12, 272) 443904 ['block6d_se_excite[0][0]']
block6d_project_bn (BatchNorma (None, 12, 12, 272) 1088 ['block6d_project_conv[0][0]']
lization)
block6d_drop (FixedDropout) (None, 12, 12, 272) 0 ['block6d_project_bn[0][0]']
block6d_add (Add) (None, 12, 12, 272) 0 ['block6d_drop[0][0]',
'block6c_add[0][0]']
block6e_expand_conv (Conv2D) (None, 12, 12, 1632 443904 ['block6d_add[0][0]']
)
block6e_expand_bn (BatchNormal (None, 12, 12, 1632 6528 ['block6e_expand_conv[0][0]']
ization) )
block6e_expand_activation (Act (None, 12, 12, 1632 0 ['block6e_expand_bn[0][0]']
ivation) )
block6e_dwconv (DepthwiseConv2 (None, 12, 12, 1632 40800 ['block6e_expand_activation[0][0]
D) ) ']
block6e_bn (BatchNormalization (None, 12, 12, 1632 6528 ['block6e_dwconv[0][0]']
) )
block6e_activation (Activation (None, 12, 12, 1632 0 ['block6e_bn[0][0]']
) )
block6e_se_squeeze (GlobalAver (None, 1632) 0 ['block6e_activation[0][0]']
agePooling2D)
block6e_se_reshape (Reshape) (None, 1, 1, 1632) 0 ['block6e_se_squeeze[0][0]']
block6e_se_reduce (Conv2D) (None, 1, 1, 68) 111044 ['block6e_se_reshape[0][0]']
block6e_se_expand (Conv2D) (None, 1, 1, 1632) 112608 ['block6e_se_reduce[0][0]']
block6e_se_excite (Multiply) (None, 12, 12, 1632 0 ['block6e_activation[0][0]',
) 'block6e_se_expand[0][0]']
block6e_project_conv (Conv2D) (None, 12, 12, 272) 443904 ['block6e_se_excite[0][0]']
block6e_project_bn (BatchNorma (None, 12, 12, 272) 1088 ['block6e_project_conv[0][0]']
lization)
block6e_drop (FixedDropout) (None, 12, 12, 272) 0 ['block6e_project_bn[0][0]']
block6e_add (Add) (None, 12, 12, 272) 0 ['block6e_drop[0][0]',
'block6d_add[0][0]']
block6f_expand_conv (Conv2D) (None, 12, 12, 1632 443904 ['block6e_add[0][0]']
)
block6f_expand_bn (BatchNormal (None, 12, 12, 1632 6528 ['block6f_expand_conv[0][0]']
ization) )
block6f_expand_activation (Act (None, 12, 12, 1632 0 ['block6f_expand_bn[0][0]']
ivation) )
block6f_dwconv (DepthwiseConv2 (None, 12, 12, 1632 40800 ['block6f_expand_activation[0][0]
D) ) ']
block6f_bn (BatchNormalization (None, 12, 12, 1632 6528 ['block6f_dwconv[0][0]']
) )
block6f_activation (Activation (None, 12, 12, 1632 0 ['block6f_bn[0][0]']
) )
block6f_se_squeeze (GlobalAver (None, 1632) 0 ['block6f_activation[0][0]']
agePooling2D)
block6f_se_reshape (Reshape) (None, 1, 1, 1632) 0 ['block6f_se_squeeze[0][0]']
block6f_se_reduce (Conv2D) (None, 1, 1, 68) 111044 ['block6f_se_reshape[0][0]']
block6f_se_expand (Conv2D) (None, 1, 1, 1632) 112608 ['block6f_se_reduce[0][0]']
block6f_se_excite (Multiply) (None, 12, 12, 1632 0 ['block6f_activation[0][0]',
) 'block6f_se_expand[0][0]']
block6f_project_conv (Conv2D) (None, 12, 12, 272) 443904 ['block6f_se_excite[0][0]']
block6f_project_bn (BatchNorma (None, 12, 12, 272) 1088 ['block6f_project_conv[0][0]']
lization)
block6f_drop (FixedDropout) (None, 12, 12, 272) 0 ['block6f_project_bn[0][0]']
block6f_add (Add) (None, 12, 12, 272) 0 ['block6f_drop[0][0]',
'block6e_add[0][0]']
block6g_expand_conv (Conv2D) (None, 12, 12, 1632 443904 ['block6f_add[0][0]']
)
block6g_expand_bn (BatchNormal (None, 12, 12, 1632 6528 ['block6g_expand_conv[0][0]']
ization) )
block6g_expand_activation (Act (None, 12, 12, 1632 0 ['block6g_expand_bn[0][0]']
ivation) )
block6g_dwconv (DepthwiseConv2 (None, 12, 12, 1632 40800 ['block6g_expand_activation[0][0]
D) ) ']
block6g_bn (BatchNormalization (None, 12, 12, 1632 6528 ['block6g_dwconv[0][0]']
) )
block6g_activation (Activation (None, 12, 12, 1632 0 ['block6g_bn[0][0]']
) )
block6g_se_squeeze (GlobalAver (None, 1632) 0 ['block6g_activation[0][0]']
agePooling2D)
block6g_se_reshape (Reshape) (None, 1, 1, 1632) 0 ['block6g_se_squeeze[0][0]']
block6g_se_reduce (Conv2D) (None, 1, 1, 68) 111044 ['block6g_se_reshape[0][0]']
block6g_se_expand (Conv2D) (None, 1, 1, 1632) 112608 ['block6g_se_reduce[0][0]']
block6g_se_excite (Multiply) (None, 12, 12, 1632 0 ['block6g_activation[0][0]',
) 'block6g_se_expand[0][0]']
block6g_project_conv (Conv2D) (None, 12, 12, 272) 443904 ['block6g_se_excite[0][0]']
block6g_project_bn (BatchNorma (None, 12, 12, 272) 1088 ['block6g_project_conv[0][0]']
lization)
block6g_drop (FixedDropout) (None, 12, 12, 272) 0 ['block6g_project_bn[0][0]']
block6g_add (Add) (None, 12, 12, 272) 0 ['block6g_drop[0][0]',
'block6f_add[0][0]']
block6h_expand_conv (Conv2D) (None, 12, 12, 1632 443904 ['block6g_add[0][0]']
)
block6h_expand_bn (BatchNormal (None, 12, 12, 1632 6528 ['block6h_expand_conv[0][0]']
ization) )
block6h_expand_activation (Act (None, 12, 12, 1632 0 ['block6h_expand_bn[0][0]']
ivation) )
block6h_dwconv (DepthwiseConv2 (None, 12, 12, 1632 40800 ['block6h_expand_activation[0][0]
D) ) ']
block6h_bn (BatchNormalization (None, 12, 12, 1632 6528 ['block6h_dwconv[0][0]']
) )
block6h_activation (Activation (None, 12, 12, 1632 0 ['block6h_bn[0][0]']
) )
block6h_se_squeeze (GlobalAver (None, 1632) 0 ['block6h_activation[0][0]']
agePooling2D)
block6h_se_reshape (Reshape) (None, 1, 1, 1632) 0 ['block6h_se_squeeze[0][0]']
block6h_se_reduce (Conv2D) (None, 1, 1, 68) 111044 ['block6h_se_reshape[0][0]']
block6h_se_expand (Conv2D) (None, 1, 1, 1632) 112608 ['block6h_se_reduce[0][0]']
block6h_se_excite (Multiply) (None, 12, 12, 1632 0 ['block6h_activation[0][0]',
) 'block6h_se_expand[0][0]']
block6h_project_conv (Conv2D) (None, 12, 12, 272) 443904 ['block6h_se_excite[0][0]']
block6h_project_bn (BatchNorma (None, 12, 12, 272) 1088 ['block6h_project_conv[0][0]']
lization)
block6h_drop (FixedDropout) (None, 12, 12, 272) 0 ['block6h_project_bn[0][0]']
block6h_add (Add) (None, 12, 12, 272) 0 ['block6h_drop[0][0]',
'block6g_add[0][0]']
block7a_expand_conv (Conv2D) (None, 12, 12, 1632 443904 ['block6h_add[0][0]']
)
block7a_expand_bn (BatchNormal (None, 12, 12, 1632 6528 ['block7a_expand_conv[0][0]']
ization) )
block7a_expand_activation (Act (None, 12, 12, 1632 0 ['block7a_expand_bn[0][0]']
ivation) )
block7a_dwconv (DepthwiseConv2 (None, 12, 12, 1632 14688 ['block7a_expand_activation[0][0]
D) ) ']
block7a_bn (BatchNormalization (None, 12, 12, 1632 6528 ['block7a_dwconv[0][0]']
) )
block7a_activation (Activation (None, 12, 12, 1632 0 ['block7a_bn[0][0]']
) )
block7a_se_squeeze (GlobalAver (None, 1632) 0 ['block7a_activation[0][0]']
agePooling2D)
block7a_se_reshape (Reshape) (None, 1, 1, 1632) 0 ['block7a_se_squeeze[0][0]']
block7a_se_reduce (Conv2D) (None, 1, 1, 68) 111044 ['block7a_se_reshape[0][0]']
block7a_se_expand (Conv2D) (None, 1, 1, 1632) 112608 ['block7a_se_reduce[0][0]']
block7a_se_excite (Multiply) (None, 12, 12, 1632 0 ['block7a_activation[0][0]',
) 'block7a_se_expand[0][0]']
block7a_project_conv (Conv2D) (None, 12, 12, 448) 731136 ['block7a_se_excite[0][0]']
block7a_project_bn (BatchNorma (None, 12, 12, 448) 1792 ['block7a_project_conv[0][0]']
lization)
block7b_expand_conv (Conv2D) (None, 12, 12, 2688 1204224 ['block7a_project_bn[0][0]']
)
block7b_expand_bn (BatchNormal (None, 12, 12, 2688 10752 ['block7b_expand_conv[0][0]']
ization) )
block7b_expand_activation (Act (None, 12, 12, 2688 0 ['block7b_expand_bn[0][0]']
ivation) )
block7b_dwconv (DepthwiseConv2 (None, 12, 12, 2688 24192 ['block7b_expand_activation[0][0]
D) ) ']
block7b_bn (BatchNormalization (None, 12, 12, 2688 10752 ['block7b_dwconv[0][0]']
) )
block7b_activation (Activation (None, 12, 12, 2688 0 ['block7b_bn[0][0]']
) )
block7b_se_squeeze (GlobalAver (None, 2688) 0 ['block7b_activation[0][0]']
agePooling2D)
block7b_se_reshape (Reshape) (None, 1, 1, 2688) 0 ['block7b_se_squeeze[0][0]']
block7b_se_reduce (Conv2D) (None, 1, 1, 112) 301168 ['block7b_se_reshape[0][0]']
block7b_se_expand (Conv2D) (None, 1, 1, 2688) 303744 ['block7b_se_reduce[0][0]']
block7b_se_excite (Multiply) (None, 12, 12, 2688 0 ['block7b_activation[0][0]',
) 'block7b_se_expand[0][0]']
block7b_project_conv (Conv2D) (None, 12, 12, 448) 1204224 ['block7b_se_excite[0][0]']
block7b_project_bn (BatchNorma (None, 12, 12, 448) 1792 ['block7b_project_conv[0][0]']
lization)
block7b_drop (FixedDropout) (None, 12, 12, 448) 0 ['block7b_project_bn[0][0]']
block7b_add (Add) (None, 12, 12, 448) 0 ['block7b_drop[0][0]',
'block7a_project_bn[0][0]']
top_conv (Conv2D) (None, 12, 12, 1792 802816 ['block7b_add[0][0]']
)
top_bn (BatchNormalization) (None, 12, 12, 1792 7168 ['top_conv[0][0]']
)
top_activation (Activation) (None, 12, 12, 1792 0 ['top_bn[0][0]']
)
flatten_4 (Flatten) (None, 258048) 0 ['top_activation[0][0]']
dense_8 (Dense) (None, 1024) 264242176 ['flatten_4[0][0]']
dropout_4 (Dropout) (None, 1024) 0 ['dense_8[0][0]']
dense_9 (Dense) (None, 1) 1025 ['dropout_4[0][0]']
==================================================================================================
Total params: 281,917,017
Trainable params: 264,243,201
Non-trainable params: 17,673,816
__________________________________________________________________________________________________
# Get the total number of model parameters
model_params = model_final.count_params()
# Specify the optimizer, loss function and evaluation metrics.
model_final.compile(loss='binary_crossentropy', optimizer=tf.keras.optimizers.RMSprop(learning_rate=0.0001), metrics=['accuracy'])
t1 = time.time()
# Train the model
eff_history = model_final.fit_generator(train_generator, validation_data = validation_generator, steps_per_epoch = 100, epochs = 10)
fit_time = time.time() - t1
<ipython-input-104-b7f31b017b18>:3: UserWarning: `Model.fit_generator` is deprecated and will be removed in a future version. Please use `Model.fit`, which supports generators. eff_history = model_final.fit_generator(train_generator, validation_data = validation_generator, steps_per_epoch = 100, epochs = 10)
Epoch 1/10
100/100 [==============================] - 93s 805ms/step - loss: 0.6515 - accuracy: 0.9295 - val_loss: 0.0976 - val_accuracy: 0.9887
Epoch 2/10
100/100 [==============================] - 76s 754ms/step - loss: 0.5932 - accuracy: 0.9555 - val_loss: 0.0481 - val_accuracy: 0.9937
Epoch 3/10
100/100 [==============================] - 76s 756ms/step - loss: 0.3212 - accuracy: 0.9673 - val_loss: 0.0374 - val_accuracy: 0.9950
Epoch 4/10
100/100 [==============================] - 77s 767ms/step - loss: 0.4213 - accuracy: 0.9645 - val_loss: 0.0330 - val_accuracy: 0.9912
Epoch 5/10
100/100 [==============================] - 76s 763ms/step - loss: 0.4679 - accuracy: 0.9675 - val_loss: 0.0667 - val_accuracy: 0.9912
Epoch 6/10
100/100 [==============================] - 77s 769ms/step - loss: 0.6451 - accuracy: 0.9685 - val_loss: 0.0410 - val_accuracy: 0.9937
Epoch 7/10
100/100 [==============================] - 76s 760ms/step - loss: 0.7479 - accuracy: 0.9602 - val_loss: 0.0122 - val_accuracy: 0.9962
Epoch 8/10
100/100 [==============================] - 76s 763ms/step - loss: 0.4988 - accuracy: 0.9733 - val_loss: 0.0065 - val_accuracy: 0.9975
Epoch 9/10
100/100 [==============================] - 77s 772ms/step - loss: 0.5127 - accuracy: 0.9715 - val_loss: 0.0069 - val_accuracy: 0.9975
Epoch 10/10
100/100 [==============================] - 76s 761ms/step - loss: 0.4433 - accuracy: 0.9725 - val_loss: 0.0030 - val_accuracy: 0.9987
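As the warning above notes, `Model.fit_generator` is deprecated; the same training run could be expressed with the non-deprecated `Model.fit` API, which also accepts generators (a minimal sketch, assuming the generators and hyperparameters defined above):
# Non-deprecated equivalent of the fit_generator call above (sketch; same generators and settings)
eff_history = model_final.fit(train_generator, validation_data=validation_generator, steps_per_epoch=100, epochs=10)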
# Time (in seconds) it took to fit the model
print(fit_time)
787.6704769134521
#Plot training and validation accuracy and loss for each epoch
acc = eff_history.history['accuracy']
val_acc = eff_history.history['val_accuracy']
loss = eff_history.history['loss']
val_loss = eff_history.history['val_loss']
epochs = range(1,len(acc) + 1)
plt.plot(epochs,acc,label = 'Training Accuracy')
plt.plot(epochs,val_acc,label = 'Validation Accuracy')
plt.title('Training and Validation Accuracy')
plt.legend()
plt.figure()
plt.plot(epochs,loss,label = 'Training loss')
plt.plot(epochs,val_loss,label = 'Validation Loss')
plt.title('Training and Validation Loss')
plt.legend()
plt.show()
# Test dataset
test_datagen = ImageDataGenerator(rescale=1./255)
test_generator = test_datagen.flow_from_directory(
test_dir,
target_size=(380, 380),
shuffle = False,
class_mode='binary',
batch_size=1)
Found 2023 images belonging to 2 classes.
# Get the number of test samples
filenames = test_generator.filenames
nb_samples = len(filenames)
#Predict on test set
predict = model_final.predict_generator(test_generator,steps = nb_samples)
<ipython-input-109-7710eff794cf>:2: UserWarning: `Model.predict_generator` is deprecated and will be removed in a future version. Please use `Model.predict`, which supports generators. predict = model_final.predict_generator(test_generator,steps = nb_samples)
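Similarly, the deprecated `predict_generator` call could be replaced with `Model.predict`, which supports generators (a minimal sketch, same generator and step count):
# Non-deprecated equivalent of the predict_generator call above (sketch)
predict = model_final.predict(test_generator, steps=nb_samples)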
#Get list of prediction results
pred_list = []
for i in predict:
if i > 0.5:
result = 1 #dog
pred_list.append(result)
else:
result = 0 #cat
pred_list.append(result)
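The per-image thresholding loop above can also be written as a single vectorized step (a sketch assuming `predict` is an (N, 1) array of sigmoid outputs, with 1 = dog and 0 = cat):
# Vectorized thresholding at 0.5 (sketch): flatten the (N, 1) prediction array and cast to 0/1 labels
pred_list = (predict > 0.5).astype(int).ravel().tolist()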
#Create dataframe of image ID, image true label, image predicted label
import pandas as pd
image_ids = [name.split('/')[-1] for name in test_generator.filenames]
image_label = [name.split('/')[0] for name in test_generator.filenames]
data = {'id': image_ids, 'label':image_label, 'prediction':pred_list}
data_df = pd.DataFrame(data)
data_df.label.replace(('cats', 'dogs'), (0, 1), inplace=True) # change cat and dog label to 0 or 1
#Get test accuracy score
from sklearn.metrics import accuracy_score, confusion_matrix
test_accuracy = accuracy_score(data_df['label'], data_df['prediction'])
print('Test Accuracy: ', round((test_accuracy * 100), 2), "%")
Test Accuracy: 98.71 %
from sklearn.metrics import classification_report
#Classification Report
print(classification_report(data_df['label'], data_df['prediction']))
precision recall f1-score support
0 1.00 0.98 0.99 1011
1 0.98 1.00 0.99 1012
accuracy 0.99 2023
macro avg 0.99 0.99 0.99 2023
weighted avg 0.99 0.99 0.99 2023
#Create confusion matrix
import seaborn as sns
label = [0, 1] #0 = cat and 1 = dog
cm = confusion_matrix(data_df['label'], data_df['prediction'], labels = label)
#Plot
ax = plt.subplot()
sns.heatmap(cm, annot=True, fmt='g', ax=ax)
# Labels, title and ticks
ax.set_xlabel('Predicted labels')
ax.set_ylabel('True labels')
ax.set_title('Confusion Matrix')
ax.xaxis.set_ticklabels(["Cat", "Dog"])
ax.yaxis.set_ticklabels(["Cat", "Dog"])
[Text(0, 0.5, 'Cat'), Text(0, 1.5, 'Dog')]
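scikit-learn's ConfusionMatrixDisplay offers an equivalent one-call plot; a minimal sketch assuming the same label and prediction columns:
# Alternative confusion-matrix plot using scikit-learn's built-in display (sketch)
from sklearn.metrics import ConfusionMatrixDisplay
ConfusionMatrixDisplay.from_predictions(data_df['label'], data_df['prediction'], display_labels=["Cat", "Dog"])
plt.show()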
ExperimentLog.loc[len(ExperimentLog)] = [
"EfficientNet B4",
380,
"RMSprop",
10,
max(acc),
max(val_acc),
test_accuracy,
fit_time,
model_params
]
ExperimentLog
| Base Model | Input Resolution | Optimizer | Epochs | Training Accuracy | Validation Accuracy | Test Accuracy | Fit Time | Total Parameters | |
|---|---|---|---|---|---|---|---|---|---|
| 0 | EfficientNet B0 | 224 | RMSprop | 10 | 0.951500 | 0.98750 | 0.971330 | 2195.619411 | 68276893 |
| 1 | EfficientNet B0 with decay | 224 | RMSprop | 10 | 0.956171 | 0.98750 | 0.974790 | 381.655278 | 68276893 |
| 2 | EfficientNet B1 | 240 | RMSprop | 10 | 0.959698 | 0.99125 | 0.985171 | 349.486195 | 90463361 |
| 3 | EfficientNet B2 | 260 | RMSprop | 10 | 0.965239 | 0.99500 | 0.983193 | 380.205944 | 124555763 |
| 4 | EfficientNet B3 | 300 | RMSprop | 10 | 0.970277 | 0.99750 | 0.988631 | 516.328792 | 168071977 |
| 5 | EfficientNet B4 | 380 | RMSprop | 10 | 0.973300 | 0.99875 | 0.987148 | 787.670477 | 281917017 |
# Add rescaling and augmentation to ImageDataGenerator for the training set
train_datagen = ImageDataGenerator(rescale = 1./255., rotation_range = 40, width_shift_range = 0.2, height_shift_range = 0.2, shear_range = 0.2, zoom_range = 0.2, horizontal_flip = True, validation_split=0.1) # set validation split
# Rescale validation set. No augmentation on the validation set.
validation_datagen = ImageDataGenerator(rescale = 1./255.,validation_split=0.1) # set validation split
#Read images directly from directory.
train_generator = train_datagen.flow_from_directory(train_dir, seed = 42, shuffle = True, batch_size = 20, class_mode = 'binary', target_size = (456, 456), subset='training') #set as training data
validation_generator = validation_datagen.flow_from_directory(train_dir, seed = 42, shuffle = True, batch_size = 20, class_mode = 'binary', target_size = (456, 456), subset='validation') # same directory as training data. Set as validation data
Found 7205 images belonging to 2 classes.
Found 800 images belonging to 2 classes.
# Instantiate the EfficientNetB5 architecture with pre-trained ImageNet weights
base_model = efn.EfficientNetB5(input_shape = (456, 456, 3), include_top = False, weights = 'imagenet')
Downloading data from https://github.com/Callidior/keras-applications/releases/download/efficientnet/efficientnet-b5_weights_tf_dim_ordering_tf_kernels_autoaugment_notop.h5 115515256/115515256 [==============================] - 15s 0us/step
# Freeze all base model layers (set the trainable attribute to False)
for layer in base_model.layers:
layer.trainable = False
#Build on top of existing base model.
x = base_model.output
x = layers.Flatten()(x) #convert to 1D array
x = layers.Dense(1024, activation="relu")(x) #fully connected layer with 1,024 hidden units and ReLU activation
x = layers.Dropout(0.5)(x) #Drops 50% of inputs to zero at each training iteration (prevents overfitting)
# Add a final sigmoid layer with 1 node for classification output (probability between 0 and 1)
predictions = layers.Dense(1, activation="sigmoid")(x)
model_final = Model(inputs = base_model.input, outputs = predictions)
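To confirm that only the newly added head is trainable after freezing the base model, the trainable/non-trainable parameter split can be checked directly (a minimal sketch using Keras backend utilities; the counts should agree with the summary below):
# Count trainable vs. non-trainable parameters (sketch)
import numpy as np
import tensorflow as tf
trainable_params = int(np.sum([tf.keras.backend.count_params(w) for w in model_final.trainable_weights]))
non_trainable_params = int(np.sum([tf.keras.backend.count_params(w) for w in model_final.non_trainable_weights]))
print('Trainable:', trainable_params, '| Non-trainable:', non_trainable_params)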
# Print the model summary (Keras Model.summary; the torchsummary import is not needed for a Keras model)
model_sum = model_final.summary()
Model: "model_5"
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_6 (InputLayer) [(None, 456, 456, 3 0 []
)]
stem_conv (Conv2D) (None, 228, 228, 48 1296 ['input_6[0][0]']
)
stem_bn (BatchNormalization) (None, 228, 228, 48 192 ['stem_conv[0][0]']
)
stem_activation (Activation) (None, 228, 228, 48 0 ['stem_bn[0][0]']
)
block1a_dwconv (DepthwiseConv2 (None, 228, 228, 48 432 ['stem_activation[0][0]']
D) )
block1a_bn (BatchNormalization (None, 228, 228, 48 192 ['block1a_dwconv[0][0]']
) )
block1a_activation (Activation (None, 228, 228, 48 0 ['block1a_bn[0][0]']
) )
block1a_se_squeeze (GlobalAver (None, 48) 0 ['block1a_activation[0][0]']
agePooling2D)
block1a_se_reshape (Reshape) (None, 1, 1, 48) 0 ['block1a_se_squeeze[0][0]']
block1a_se_reduce (Conv2D) (None, 1, 1, 12) 588 ['block1a_se_reshape[0][0]']
block1a_se_expand (Conv2D) (None, 1, 1, 48) 624 ['block1a_se_reduce[0][0]']
block1a_se_excite (Multiply) (None, 228, 228, 48 0 ['block1a_activation[0][0]',
) 'block1a_se_expand[0][0]']
block1a_project_conv (Conv2D) (None, 228, 228, 24 1152 ['block1a_se_excite[0][0]']
)
block1a_project_bn (BatchNorma (None, 228, 228, 24 96 ['block1a_project_conv[0][0]']
lization) )
block1b_dwconv (DepthwiseConv2 (None, 228, 228, 24 216 ['block1a_project_bn[0][0]']
D) )
block1b_bn (BatchNormalization (None, 228, 228, 24 96 ['block1b_dwconv[0][0]']
) )
block1b_activation (Activation (None, 228, 228, 24 0 ['block1b_bn[0][0]']
) )
block1b_se_squeeze (GlobalAver (None, 24) 0 ['block1b_activation[0][0]']
agePooling2D)
block1b_se_reshape (Reshape) (None, 1, 1, 24) 0 ['block1b_se_squeeze[0][0]']
block1b_se_reduce (Conv2D) (None, 1, 1, 6) 150 ['block1b_se_reshape[0][0]']
block1b_se_expand (Conv2D) (None, 1, 1, 24) 168 ['block1b_se_reduce[0][0]']
block1b_se_excite (Multiply) (None, 228, 228, 24 0 ['block1b_activation[0][0]',
) 'block1b_se_expand[0][0]']
block1b_project_conv (Conv2D) (None, 228, 228, 24 576 ['block1b_se_excite[0][0]']
)
block1b_project_bn (BatchNorma (None, 228, 228, 24 96 ['block1b_project_conv[0][0]']
lization) )
block1b_drop (FixedDropout) (None, 228, 228, 24 0 ['block1b_project_bn[0][0]']
)
block1b_add (Add) (None, 228, 228, 24 0 ['block1b_drop[0][0]',
) 'block1a_project_bn[0][0]']
block1c_dwconv (DepthwiseConv2 (None, 228, 228, 24 216 ['block1b_add[0][0]']
D) )
block1c_bn (BatchNormalization (None, 228, 228, 24 96 ['block1c_dwconv[0][0]']
) )
block1c_activation (Activation (None, 228, 228, 24 0 ['block1c_bn[0][0]']
) )
block1c_se_squeeze (GlobalAver (None, 24) 0 ['block1c_activation[0][0]']
agePooling2D)
block1c_se_reshape (Reshape) (None, 1, 1, 24) 0 ['block1c_se_squeeze[0][0]']
block1c_se_reduce (Conv2D) (None, 1, 1, 6) 150 ['block1c_se_reshape[0][0]']
block1c_se_expand (Conv2D) (None, 1, 1, 24) 168 ['block1c_se_reduce[0][0]']
block1c_se_excite (Multiply) (None, 228, 228, 24 0 ['block1c_activation[0][0]',
) 'block1c_se_expand[0][0]']
block1c_project_conv (Conv2D) (None, 228, 228, 24 576 ['block1c_se_excite[0][0]']
)
block1c_project_bn (BatchNorma (None, 228, 228, 24 96 ['block1c_project_conv[0][0]']
lization) )
block1c_drop (FixedDropout) (None, 228, 228, 24 0 ['block1c_project_bn[0][0]']
)
block1c_add (Add) (None, 228, 228, 24 0 ['block1c_drop[0][0]',
) 'block1b_add[0][0]']
block2a_expand_conv (Conv2D) (None, 228, 228, 14 3456 ['block1c_add[0][0]']
4)
block2a_expand_bn (BatchNormal (None, 228, 228, 14 576 ['block2a_expand_conv[0][0]']
ization) 4)
block2a_expand_activation (Act (None, 228, 228, 14 0 ['block2a_expand_bn[0][0]']
ivation) 4)
block2a_dwconv (DepthwiseConv2 (None, 114, 114, 14 1296 ['block2a_expand_activation[0][0]
D) 4) ']
block2a_bn (BatchNormalization (None, 114, 114, 14 576 ['block2a_dwconv[0][0]']
) 4)
block2a_activation (Activation (None, 114, 114, 14 0 ['block2a_bn[0][0]']
) 4)
block2a_se_squeeze (GlobalAver (None, 144) 0 ['block2a_activation[0][0]']
agePooling2D)
block2a_se_reshape (Reshape) (None, 1, 1, 144) 0 ['block2a_se_squeeze[0][0]']
block2a_se_reduce (Conv2D) (None, 1, 1, 6) 870 ['block2a_se_reshape[0][0]']
block2a_se_expand (Conv2D) (None, 1, 1, 144) 1008 ['block2a_se_reduce[0][0]']
block2a_se_excite (Multiply) (None, 114, 114, 14 0 ['block2a_activation[0][0]',
4) 'block2a_se_expand[0][0]']
block2a_project_conv (Conv2D) (None, 114, 114, 40 5760 ['block2a_se_excite[0][0]']
)
block2a_project_bn (BatchNorma (None, 114, 114, 40 160 ['block2a_project_conv[0][0]']
lization) )
block2b_expand_conv (Conv2D) (None, 114, 114, 24 9600 ['block2a_project_bn[0][0]']
0)
block2b_expand_bn (BatchNormal (None, 114, 114, 24 960 ['block2b_expand_conv[0][0]']
ization) 0)
block2b_expand_activation (Act (None, 114, 114, 24 0 ['block2b_expand_bn[0][0]']
ivation) 0)
block2b_dwconv (DepthwiseConv2 (None, 114, 114, 24 2160 ['block2b_expand_activation[0][0]
D) 0) ']
block2b_bn (BatchNormalization (None, 114, 114, 24 960 ['block2b_dwconv[0][0]']
) 0)
block2b_activation (Activation (None, 114, 114, 24 0 ['block2b_bn[0][0]']
) 0)
block2b_se_squeeze (GlobalAver (None, 240) 0 ['block2b_activation[0][0]']
agePooling2D)
block2b_se_reshape (Reshape) (None, 1, 1, 240) 0 ['block2b_se_squeeze[0][0]']
block2b_se_reduce (Conv2D) (None, 1, 1, 10) 2410 ['block2b_se_reshape[0][0]']
block2b_se_expand (Conv2D) (None, 1, 1, 240) 2640 ['block2b_se_reduce[0][0]']
block2b_se_excite (Multiply) (None, 114, 114, 24 0 ['block2b_activation[0][0]',
0) 'block2b_se_expand[0][0]']
block2b_project_conv (Conv2D) (None, 114, 114, 40 9600 ['block2b_se_excite[0][0]']
)
block2b_project_bn (BatchNorma (None, 114, 114, 40 160 ['block2b_project_conv[0][0]']
lization) )
block2b_drop (FixedDropout) (None, 114, 114, 40 0 ['block2b_project_bn[0][0]']
)
block2b_add (Add) (None, 114, 114, 40 0 ['block2b_drop[0][0]',
) 'block2a_project_bn[0][0]']
block2c_expand_conv (Conv2D) (None, 114, 114, 24 9600 ['block2b_add[0][0]']
0)
block2c_expand_bn (BatchNormal (None, 114, 114, 24 960 ['block2c_expand_conv[0][0]']
ization) 0)
block2c_expand_activation (Act (None, 114, 114, 24 0 ['block2c_expand_bn[0][0]']
ivation) 0)
block2c_dwconv (DepthwiseConv2 (None, 114, 114, 24 2160 ['block2c_expand_activation[0][0]
D) 0) ']
block2c_bn (BatchNormalization (None, 114, 114, 24 960 ['block2c_dwconv[0][0]']
) 0)
block2c_activation (Activation (None, 114, 114, 24 0 ['block2c_bn[0][0]']
) 0)
block2c_se_squeeze (GlobalAver (None, 240) 0 ['block2c_activation[0][0]']
agePooling2D)
block2c_se_reshape (Reshape) (None, 1, 1, 240) 0 ['block2c_se_squeeze[0][0]']
block2c_se_reduce (Conv2D) (None, 1, 1, 10) 2410 ['block2c_se_reshape[0][0]']
block2c_se_expand (Conv2D) (None, 1, 1, 240) 2640 ['block2c_se_reduce[0][0]']
block2c_se_excite (Multiply) (None, 114, 114, 24 0 ['block2c_activation[0][0]',
0) 'block2c_se_expand[0][0]']
block2c_project_conv (Conv2D) (None, 114, 114, 40 9600 ['block2c_se_excite[0][0]']
)
block2c_project_bn (BatchNorma (None, 114, 114, 40 160 ['block2c_project_conv[0][0]']
lization) )
block2c_drop (FixedDropout) (None, 114, 114, 40 0 ['block2c_project_bn[0][0]']
)
block2c_add (Add) (None, 114, 114, 40 0 ['block2c_drop[0][0]',
) 'block2b_add[0][0]']
block2d_expand_conv (Conv2D) (None, 114, 114, 24 9600 ['block2c_add[0][0]']
0)
block2d_expand_bn (BatchNormal (None, 114, 114, 24 960 ['block2d_expand_conv[0][0]']
ization) 0)
block2d_expand_activation (Act (None, 114, 114, 24 0 ['block2d_expand_bn[0][0]']
ivation) 0)
block2d_dwconv (DepthwiseConv2 (None, 114, 114, 24 2160 ['block2d_expand_activation[0][0]
D) 0) ']
block2d_bn (BatchNormalization (None, 114, 114, 24 960 ['block2d_dwconv[0][0]']
) 0)
block2d_activation (Activation (None, 114, 114, 24 0 ['block2d_bn[0][0]']
) 0)
block2d_se_squeeze (GlobalAver (None, 240) 0 ['block2d_activation[0][0]']
agePooling2D)
block2d_se_reshape (Reshape) (None, 1, 1, 240) 0 ['block2d_se_squeeze[0][0]']
block2d_se_reduce (Conv2D) (None, 1, 1, 10) 2410 ['block2d_se_reshape[0][0]']
block2d_se_expand (Conv2D) (None, 1, 1, 240) 2640 ['block2d_se_reduce[0][0]']
block2d_se_excite (Multiply) (None, 114, 114, 24 0 ['block2d_activation[0][0]',
0) 'block2d_se_expand[0][0]']
block2d_project_conv (Conv2D) (None, 114, 114, 40 9600 ['block2d_se_excite[0][0]']
)
block2d_project_bn (BatchNorma (None, 114, 114, 40 160 ['block2d_project_conv[0][0]']
lization) )
block2d_drop (FixedDropout) (None, 114, 114, 40 0 ['block2d_project_bn[0][0]']
)
block2d_add (Add) (None, 114, 114, 40 0 ['block2d_drop[0][0]',
) 'block2c_add[0][0]']
block2e_expand_conv (Conv2D) (None, 114, 114, 24 9600 ['block2d_add[0][0]']
0)
block2e_expand_bn (BatchNormal (None, 114, 114, 24 960 ['block2e_expand_conv[0][0]']
ization) 0)
block2e_expand_activation (Act (None, 114, 114, 24 0 ['block2e_expand_bn[0][0]']
ivation) 0)
block2e_dwconv (DepthwiseConv2 (None, 114, 114, 24 2160 ['block2e_expand_activation[0][0]
D) 0) ']
block2e_bn (BatchNormalization (None, 114, 114, 24 960 ['block2e_dwconv[0][0]']
) 0)
block2e_activation (Activation (None, 114, 114, 24 0 ['block2e_bn[0][0]']
) 0)
block2e_se_squeeze (GlobalAver (None, 240) 0 ['block2e_activation[0][0]']
agePooling2D)
block2e_se_reshape (Reshape) (None, 1, 1, 240) 0 ['block2e_se_squeeze[0][0]']
block2e_se_reduce (Conv2D) (None, 1, 1, 10) 2410 ['block2e_se_reshape[0][0]']
block2e_se_expand (Conv2D) (None, 1, 1, 240) 2640 ['block2e_se_reduce[0][0]']
block2e_se_excite (Multiply) (None, 114, 114, 24 0 ['block2e_activation[0][0]',
0) 'block2e_se_expand[0][0]']
block2e_project_conv (Conv2D) (None, 114, 114, 40 9600 ['block2e_se_excite[0][0]']
)
block2e_project_bn (BatchNorma (None, 114, 114, 40 160 ['block2e_project_conv[0][0]']
lization) )
block2e_drop (FixedDropout) (None, 114, 114, 40 0 ['block2e_project_bn[0][0]']
)
block2e_add (Add) (None, 114, 114, 40 0 ['block2e_drop[0][0]',
) 'block2d_add[0][0]']
block3a_expand_conv (Conv2D) (None, 114, 114, 24 9600 ['block2e_add[0][0]']
0)
block3a_expand_bn (BatchNormal (None, 114, 114, 24 960 ['block3a_expand_conv[0][0]']
ization) 0)
block3a_expand_activation (Act (None, 114, 114, 24 0 ['block3a_expand_bn[0][0]']
ivation) 0)
block3a_dwconv (DepthwiseConv2 (None, 57, 57, 240) 6000 ['block3a_expand_activation[0][0]
D) ']
block3a_bn (BatchNormalization (None, 57, 57, 240) 960 ['block3a_dwconv[0][0]']
)
block3a_activation (Activation (None, 57, 57, 240) 0 ['block3a_bn[0][0]']
)
block3a_se_squeeze (GlobalAver (None, 240) 0 ['block3a_activation[0][0]']
agePooling2D)
block3a_se_reshape (Reshape) (None, 1, 1, 240) 0 ['block3a_se_squeeze[0][0]']
block3a_se_reduce (Conv2D) (None, 1, 1, 10) 2410 ['block3a_se_reshape[0][0]']
block3a_se_expand (Conv2D) (None, 1, 1, 240) 2640 ['block3a_se_reduce[0][0]']
block3a_se_excite (Multiply) (None, 57, 57, 240) 0 ['block3a_activation[0][0]',
'block3a_se_expand[0][0]']
block3a_project_conv (Conv2D) (None, 57, 57, 64) 15360 ['block3a_se_excite[0][0]']
block3a_project_bn (BatchNorma (None, 57, 57, 64) 256 ['block3a_project_conv[0][0]']
lization)
block3b_expand_conv (Conv2D) (None, 57, 57, 384) 24576 ['block3a_project_bn[0][0]']
block3b_expand_bn (BatchNormal (None, 57, 57, 384) 1536 ['block3b_expand_conv[0][0]']
ization)
block3b_expand_activation (Act (None, 57, 57, 384) 0 ['block3b_expand_bn[0][0]']
ivation)
block3b_dwconv (DepthwiseConv2 (None, 57, 57, 384) 9600 ['block3b_expand_activation[0][0]
D) ']
block3b_bn (BatchNormalization (None, 57, 57, 384) 1536 ['block3b_dwconv[0][0]']
)
block3b_activation (Activation (None, 57, 57, 384) 0 ['block3b_bn[0][0]']
)
block3b_se_squeeze (GlobalAver (None, 384) 0 ['block3b_activation[0][0]']
agePooling2D)
block3b_se_reshape (Reshape) (None, 1, 1, 384) 0 ['block3b_se_squeeze[0][0]']
block3b_se_reduce (Conv2D) (None, 1, 1, 16) 6160 ['block3b_se_reshape[0][0]']
block3b_se_expand (Conv2D) (None, 1, 1, 384) 6528 ['block3b_se_reduce[0][0]']
block3b_se_excite (Multiply) (None, 57, 57, 384) 0 ['block3b_activation[0][0]',
'block3b_se_expand[0][0]']
block3b_project_conv (Conv2D) (None, 57, 57, 64) 24576 ['block3b_se_excite[0][0]']
block3b_project_bn (BatchNorma (None, 57, 57, 64) 256 ['block3b_project_conv[0][0]']
lization)
block3b_drop (FixedDropout) (None, 57, 57, 64) 0 ['block3b_project_bn[0][0]']
block3b_add (Add) (None, 57, 57, 64) 0 ['block3b_drop[0][0]',
'block3a_project_bn[0][0]']
block3c_expand_conv (Conv2D) (None, 57, 57, 384) 24576 ['block3b_add[0][0]']
block3c_expand_bn (BatchNormal (None, 57, 57, 384) 1536 ['block3c_expand_conv[0][0]']
ization)
block3c_expand_activation (Act (None, 57, 57, 384) 0 ['block3c_expand_bn[0][0]']
ivation)
block3c_dwconv (DepthwiseConv2 (None, 57, 57, 384) 9600 ['block3c_expand_activation[0][0]
D) ']
block3c_bn (BatchNormalization (None, 57, 57, 384) 1536 ['block3c_dwconv[0][0]']
)
block3c_activation (Activation (None, 57, 57, 384) 0 ['block3c_bn[0][0]']
)
block3c_se_squeeze (GlobalAver (None, 384) 0 ['block3c_activation[0][0]']
agePooling2D)
block3c_se_reshape (Reshape) (None, 1, 1, 384) 0 ['block3c_se_squeeze[0][0]']
block3c_se_reduce (Conv2D) (None, 1, 1, 16) 6160 ['block3c_se_reshape[0][0]']
block3c_se_expand (Conv2D) (None, 1, 1, 384) 6528 ['block3c_se_reduce[0][0]']
block3c_se_excite (Multiply) (None, 57, 57, 384) 0 ['block3c_activation[0][0]',
'block3c_se_expand[0][0]']
block3c_project_conv (Conv2D) (None, 57, 57, 64) 24576 ['block3c_se_excite[0][0]']
block3c_project_bn (BatchNorma (None, 57, 57, 64) 256 ['block3c_project_conv[0][0]']
lization)
block3c_drop (FixedDropout) (None, 57, 57, 64) 0 ['block3c_project_bn[0][0]']
block3c_add (Add) (None, 57, 57, 64) 0 ['block3c_drop[0][0]',
'block3b_add[0][0]']
block3d_expand_conv (Conv2D) (None, 57, 57, 384) 24576 ['block3c_add[0][0]']
block3d_expand_bn (BatchNormal (None, 57, 57, 384) 1536 ['block3d_expand_conv[0][0]']
ization)
block3d_expand_activation (Act (None, 57, 57, 384) 0 ['block3d_expand_bn[0][0]']
ivation)
block3d_dwconv (DepthwiseConv2 (None, 57, 57, 384) 9600 ['block3d_expand_activation[0][0]
D) ']
block3d_bn (BatchNormalization (None, 57, 57, 384) 1536 ['block3d_dwconv[0][0]']
)
block3d_activation (Activation (None, 57, 57, 384) 0 ['block3d_bn[0][0]']
)
block3d_se_squeeze (GlobalAver (None, 384) 0 ['block3d_activation[0][0]']
agePooling2D)
block3d_se_reshape (Reshape) (None, 1, 1, 384) 0 ['block3d_se_squeeze[0][0]']
block3d_se_reduce (Conv2D) (None, 1, 1, 16) 6160 ['block3d_se_reshape[0][0]']
block3d_se_expand (Conv2D) (None, 1, 1, 384) 6528 ['block3d_se_reduce[0][0]']
block3d_se_excite (Multiply) (None, 57, 57, 384) 0 ['block3d_activation[0][0]',
'block3d_se_expand[0][0]']
block3d_project_conv (Conv2D) (None, 57, 57, 64) 24576 ['block3d_se_excite[0][0]']
block3d_project_bn (BatchNorma (None, 57, 57, 64) 256 ['block3d_project_conv[0][0]']
lization)
block3d_drop (FixedDropout) (None, 57, 57, 64) 0 ['block3d_project_bn[0][0]']
block3d_add (Add) (None, 57, 57, 64) 0 ['block3d_drop[0][0]',
'block3c_add[0][0]']
block3e_expand_conv (Conv2D) (None, 57, 57, 384) 24576 ['block3d_add[0][0]']
block3e_expand_bn (BatchNormal (None, 57, 57, 384) 1536 ['block3e_expand_conv[0][0]']
ization)
block3e_expand_activation (Act (None, 57, 57, 384) 0 ['block3e_expand_bn[0][0]']
ivation)
block3e_dwconv (DepthwiseConv2 (None, 57, 57, 384) 9600 ['block3e_expand_activation[0][0]
D) ']
block3e_bn (BatchNormalization (None, 57, 57, 384) 1536 ['block3e_dwconv[0][0]']
)
block3e_activation (Activation (None, 57, 57, 384) 0 ['block3e_bn[0][0]']
)
block3e_se_squeeze (GlobalAver (None, 384) 0 ['block3e_activation[0][0]']
agePooling2D)
block3e_se_reshape (Reshape) (None, 1, 1, 384) 0 ['block3e_se_squeeze[0][0]']
block3e_se_reduce (Conv2D) (None, 1, 1, 16) 6160 ['block3e_se_reshape[0][0]']
block3e_se_expand (Conv2D) (None, 1, 1, 384) 6528 ['block3e_se_reduce[0][0]']
block3e_se_excite (Multiply) (None, 57, 57, 384) 0 ['block3e_activation[0][0]',
'block3e_se_expand[0][0]']
block3e_project_conv (Conv2D) (None, 57, 57, 64) 24576 ['block3e_se_excite[0][0]']
block3e_project_bn (BatchNorma (None, 57, 57, 64) 256 ['block3e_project_conv[0][0]']
lization)
block3e_drop (FixedDropout) (None, 57, 57, 64) 0 ['block3e_project_bn[0][0]']
block3e_add (Add) (None, 57, 57, 64) 0 ['block3e_drop[0][0]',
'block3d_add[0][0]']
block4a_expand_conv (Conv2D) (None, 57, 57, 384) 24576 ['block3e_add[0][0]']
block4a_expand_bn (BatchNormal (None, 57, 57, 384) 1536 ['block4a_expand_conv[0][0]']
ization)
block4a_expand_activation (Act (None, 57, 57, 384) 0 ['block4a_expand_bn[0][0]']
ivation)
block4a_dwconv (DepthwiseConv2 (None, 29, 29, 384) 3456 ['block4a_expand_activation[0][0]
D) ']
block4a_bn (BatchNormalization (None, 29, 29, 384) 1536 ['block4a_dwconv[0][0]']
)
block4a_activation (Activation (None, 29, 29, 384) 0 ['block4a_bn[0][0]']
)
block4a_se_squeeze (GlobalAver (None, 384) 0 ['block4a_activation[0][0]']
agePooling2D)
block4a_se_reshape (Reshape) (None, 1, 1, 384) 0 ['block4a_se_squeeze[0][0]']
block4a_se_reduce (Conv2D) (None, 1, 1, 16) 6160 ['block4a_se_reshape[0][0]']
block4a_se_expand (Conv2D) (None, 1, 1, 384) 6528 ['block4a_se_reduce[0][0]']
block4a_se_excite (Multiply) (None, 29, 29, 384) 0 ['block4a_activation[0][0]',
'block4a_se_expand[0][0]']
block4a_project_conv (Conv2D) (None, 29, 29, 128) 49152 ['block4a_se_excite[0][0]']
block4a_project_bn (BatchNorma (None, 29, 29, 128) 512 ['block4a_project_conv[0][0]']
lization)
block4b_expand_conv (Conv2D) (None, 29, 29, 768) 98304 ['block4a_project_bn[0][0]']
block4b_expand_bn (BatchNormal (None, 29, 29, 768) 3072 ['block4b_expand_conv[0][0]']
ization)
block4b_expand_activation (Act (None, 29, 29, 768) 0 ['block4b_expand_bn[0][0]']
ivation)
block4b_dwconv (DepthwiseConv2 (None, 29, 29, 768) 6912 ['block4b_expand_activation[0][0]
D) ']
block4b_bn (BatchNormalization (None, 29, 29, 768) 3072 ['block4b_dwconv[0][0]']
)
block4b_activation (Activation (None, 29, 29, 768) 0 ['block4b_bn[0][0]']
)
block4b_se_squeeze (GlobalAver (None, 768) 0 ['block4b_activation[0][0]']
agePooling2D)
block4b_se_reshape (Reshape) (None, 1, 1, 768) 0 ['block4b_se_squeeze[0][0]']
block4b_se_reduce (Conv2D) (None, 1, 1, 32) 24608 ['block4b_se_reshape[0][0]']
block4b_se_expand (Conv2D) (None, 1, 1, 768) 25344 ['block4b_se_reduce[0][0]']
block4b_se_excite (Multiply) (None, 29, 29, 768) 0 ['block4b_activation[0][0]',
'block4b_se_expand[0][0]']
block4b_project_conv (Conv2D) (None, 29, 29, 128) 98304 ['block4b_se_excite[0][0]']
block4b_project_bn (BatchNorma (None, 29, 29, 128) 512 ['block4b_project_conv[0][0]']
lization)
block4b_drop (FixedDropout) (None, 29, 29, 128) 0 ['block4b_project_bn[0][0]']
block4b_add (Add) (None, 29, 29, 128) 0 ['block4b_drop[0][0]',
'block4a_project_bn[0][0]']
block4c_expand_conv (Conv2D) (None, 29, 29, 768) 98304 ['block4b_add[0][0]']
block4c_expand_bn (BatchNormal (None, 29, 29, 768) 3072 ['block4c_expand_conv[0][0]']
ization)
block4c_expand_activation (Act (None, 29, 29, 768) 0 ['block4c_expand_bn[0][0]']
ivation)
block4c_dwconv (DepthwiseConv2 (None, 29, 29, 768) 6912 ['block4c_expand_activation[0][0]
D) ']
block4c_bn (BatchNormalization (None, 29, 29, 768) 3072 ['block4c_dwconv[0][0]']
)
block4c_activation (Activation (None, 29, 29, 768) 0 ['block4c_bn[0][0]']
)
block4c_se_squeeze (GlobalAver (None, 768) 0 ['block4c_activation[0][0]']
agePooling2D)
block4c_se_reshape (Reshape) (None, 1, 1, 768) 0 ['block4c_se_squeeze[0][0]']
block4c_se_reduce (Conv2D) (None, 1, 1, 32) 24608 ['block4c_se_reshape[0][0]']
block4c_se_expand (Conv2D) (None, 1, 1, 768) 25344 ['block4c_se_reduce[0][0]']
block4c_se_excite (Multiply) (None, 29, 29, 768) 0 ['block4c_activation[0][0]',
'block4c_se_expand[0][0]']
block4c_project_conv (Conv2D) (None, 29, 29, 128) 98304 ['block4c_se_excite[0][0]']
block4c_project_bn (BatchNorma (None, 29, 29, 128) 512 ['block4c_project_conv[0][0]']
lization)
block4c_drop (FixedDropout) (None, 29, 29, 128) 0 ['block4c_project_bn[0][0]']
block4c_add (Add) (None, 29, 29, 128) 0 ['block4c_drop[0][0]',
'block4b_add[0][0]']
block4d_expand_conv (Conv2D) (None, 29, 29, 768) 98304 ['block4c_add[0][0]']
block4d_expand_bn (BatchNormal (None, 29, 29, 768) 3072 ['block4d_expand_conv[0][0]']
ization)
block4d_expand_activation (Act (None, 29, 29, 768) 0 ['block4d_expand_bn[0][0]']
ivation)
block4d_dwconv (DepthwiseConv2 (None, 29, 29, 768) 6912 ['block4d_expand_activation[0][0]
D) ']
block4d_bn (BatchNormalization (None, 29, 29, 768) 3072 ['block4d_dwconv[0][0]']
)
block4d_activation (Activation (None, 29, 29, 768) 0 ['block4d_bn[0][0]']
)
block4d_se_squeeze (GlobalAver (None, 768) 0 ['block4d_activation[0][0]']
agePooling2D)
block4d_se_reshape (Reshape) (None, 1, 1, 768) 0 ['block4d_se_squeeze[0][0]']
block4d_se_reduce (Conv2D) (None, 1, 1, 32) 24608 ['block4d_se_reshape[0][0]']
block4d_se_expand (Conv2D) (None, 1, 1, 768) 25344 ['block4d_se_reduce[0][0]']
block4d_se_excite (Multiply) (None, 29, 29, 768) 0 ['block4d_activation[0][0]',
'block4d_se_expand[0][0]']
block4d_project_conv (Conv2D) (None, 29, 29, 128) 98304 ['block4d_se_excite[0][0]']
block4d_project_bn (BatchNorma (None, 29, 29, 128) 512 ['block4d_project_conv[0][0]']
lization)
block4d_drop (FixedDropout) (None, 29, 29, 128) 0 ['block4d_project_bn[0][0]']
block4d_add (Add) (None, 29, 29, 128) 0 ['block4d_drop[0][0]',
'block4c_add[0][0]']
block4e_expand_conv (Conv2D) (None, 29, 29, 768) 98304 ['block4d_add[0][0]']
block4e_expand_bn (BatchNormal (None, 29, 29, 768) 3072 ['block4e_expand_conv[0][0]']
ization)
block4e_expand_activation (Act (None, 29, 29, 768) 0 ['block4e_expand_bn[0][0]']
ivation)
block4e_dwconv (DepthwiseConv2 (None, 29, 29, 768) 6912 ['block4e_expand_activation[0][0]
D) ']
block4e_bn (BatchNormalization (None, 29, 29, 768) 3072 ['block4e_dwconv[0][0]']
)
block4e_activation (Activation (None, 29, 29, 768) 0 ['block4e_bn[0][0]']
)
block4e_se_squeeze (GlobalAver (None, 768) 0 ['block4e_activation[0][0]']
agePooling2D)
block4e_se_reshape (Reshape) (None, 1, 1, 768) 0 ['block4e_se_squeeze[0][0]']
block4e_se_reduce (Conv2D) (None, 1, 1, 32) 24608 ['block4e_se_reshape[0][0]']
block4e_se_expand (Conv2D) (None, 1, 1, 768) 25344 ['block4e_se_reduce[0][0]']
block4e_se_excite (Multiply) (None, 29, 29, 768) 0 ['block4e_activation[0][0]',
'block4e_se_expand[0][0]']
block4e_project_conv (Conv2D) (None, 29, 29, 128) 98304 ['block4e_se_excite[0][0]']
block4e_project_bn (BatchNorma (None, 29, 29, 128) 512 ['block4e_project_conv[0][0]']
lization)
block4e_drop (FixedDropout) (None, 29, 29, 128) 0 ['block4e_project_bn[0][0]']
block4e_add (Add) (None, 29, 29, 128) 0 ['block4e_drop[0][0]',
'block4d_add[0][0]']
block4f_expand_conv (Conv2D) (None, 29, 29, 768) 98304 ['block4e_add[0][0]']
block4f_expand_bn (BatchNormal (None, 29, 29, 768) 3072 ['block4f_expand_conv[0][0]']
ization)
block4f_expand_activation (Act (None, 29, 29, 768) 0 ['block4f_expand_bn[0][0]']
ivation)
block4f_dwconv (DepthwiseConv2 (None, 29, 29, 768) 6912 ['block4f_expand_activation[0][0]
D) ']
block4f_bn (BatchNormalization (None, 29, 29, 768) 3072 ['block4f_dwconv[0][0]']
)
block4f_activation (Activation (None, 29, 29, 768) 0 ['block4f_bn[0][0]']
)
block4f_se_squeeze (GlobalAver (None, 768) 0 ['block4f_activation[0][0]']
agePooling2D)
block4f_se_reshape (Reshape) (None, 1, 1, 768) 0 ['block4f_se_squeeze[0][0]']
block4f_se_reduce (Conv2D) (None, 1, 1, 32) 24608 ['block4f_se_reshape[0][0]']
block4f_se_expand (Conv2D) (None, 1, 1, 768) 25344 ['block4f_se_reduce[0][0]']
block4f_se_excite (Multiply) (None, 29, 29, 768) 0 ['block4f_activation[0][0]',
'block4f_se_expand[0][0]']
block4f_project_conv (Conv2D) (None, 29, 29, 128) 98304 ['block4f_se_excite[0][0]']
block4f_project_bn (BatchNorma (None, 29, 29, 128) 512 ['block4f_project_conv[0][0]']
lization)
block4f_drop (FixedDropout) (None, 29, 29, 128) 0 ['block4f_project_bn[0][0]']
block4f_add (Add) (None, 29, 29, 128) 0 ['block4f_drop[0][0]',
'block4e_add[0][0]']
block4g_expand_conv (Conv2D) (None, 29, 29, 768) 98304 ['block4f_add[0][0]']
block4g_expand_bn (BatchNormal (None, 29, 29, 768) 3072 ['block4g_expand_conv[0][0]']
ization)
block4g_expand_activation (Act (None, 29, 29, 768) 0 ['block4g_expand_bn[0][0]']
ivation)
block4g_dwconv (DepthwiseConv2 (None, 29, 29, 768) 6912 ['block4g_expand_activation[0][0]
D) ']
block4g_bn (BatchNormalization (None, 29, 29, 768) 3072 ['block4g_dwconv[0][0]']
)
block4g_activation (Activation (None, 29, 29, 768) 0 ['block4g_bn[0][0]']
)
block4g_se_squeeze (GlobalAver (None, 768) 0 ['block4g_activation[0][0]']
agePooling2D)
block4g_se_reshape (Reshape) (None, 1, 1, 768) 0 ['block4g_se_squeeze[0][0]']
block4g_se_reduce (Conv2D) (None, 1, 1, 32) 24608 ['block4g_se_reshape[0][0]']
block4g_se_expand (Conv2D) (None, 1, 1, 768) 25344 ['block4g_se_reduce[0][0]']
block4g_se_excite (Multiply) (None, 29, 29, 768) 0 ['block4g_activation[0][0]',
'block4g_se_expand[0][0]']
block4g_project_conv (Conv2D) (None, 29, 29, 128) 98304 ['block4g_se_excite[0][0]']
block4g_project_bn (BatchNorma (None, 29, 29, 128) 512 ['block4g_project_conv[0][0]']
lization)
block4g_drop (FixedDropout) (None, 29, 29, 128) 0 ['block4g_project_bn[0][0]']
block4g_add (Add) (None, 29, 29, 128) 0 ['block4g_drop[0][0]',
'block4f_add[0][0]']
block5a_expand_conv (Conv2D) (None, 29, 29, 768) 98304 ['block4g_add[0][0]']
block5a_expand_bn (BatchNormal (None, 29, 29, 768) 3072 ['block5a_expand_conv[0][0]']
ization)
block5a_expand_activation (Act (None, 29, 29, 768) 0 ['block5a_expand_bn[0][0]']
ivation)
block5a_dwconv (DepthwiseConv2 (None, 29, 29, 768) 19200 ['block5a_expand_activation[0][0]
D) ']
block5a_bn (BatchNormalization (None, 29, 29, 768) 3072 ['block5a_dwconv[0][0]']
)
block5a_activation (Activation (None, 29, 29, 768) 0 ['block5a_bn[0][0]']
)
block5a_se_squeeze (GlobalAver (None, 768) 0 ['block5a_activation[0][0]']
agePooling2D)
block5a_se_reshape (Reshape) (None, 1, 1, 768) 0 ['block5a_se_squeeze[0][0]']
block5a_se_reduce (Conv2D) (None, 1, 1, 32) 24608 ['block5a_se_reshape[0][0]']
block5a_se_expand (Conv2D) (None, 1, 1, 768) 25344 ['block5a_se_reduce[0][0]']
block5a_se_excite (Multiply) (None, 29, 29, 768) 0 ['block5a_activation[0][0]',
'block5a_se_expand[0][0]']
block5a_project_conv (Conv2D) (None, 29, 29, 176) 135168 ['block5a_se_excite[0][0]']
block5a_project_bn (BatchNorma (None, 29, 29, 176) 704 ['block5a_project_conv[0][0]']
lization)
block5b_expand_conv (Conv2D) (None, 29, 29, 1056 185856 ['block5a_project_bn[0][0]']
)
block5b_expand_bn (BatchNormal (None, 29, 29, 1056 4224 ['block5b_expand_conv[0][0]']
ization) )
block5b_expand_activation (Act (None, 29, 29, 1056 0 ['block5b_expand_bn[0][0]']
ivation) )
block5b_dwconv (DepthwiseConv2 (None, 29, 29, 1056 26400 ['block5b_expand_activation[0][0]
D) ) ']
block5b_bn (BatchNormalization (None, 29, 29, 1056 4224 ['block5b_dwconv[0][0]']
) )
block5b_activation (Activation (None, 29, 29, 1056 0 ['block5b_bn[0][0]']
) )
block5b_se_squeeze (GlobalAver (None, 1056) 0 ['block5b_activation[0][0]']
agePooling2D)
block5b_se_reshape (Reshape) (None, 1, 1, 1056) 0 ['block5b_se_squeeze[0][0]']
block5b_se_reduce (Conv2D) (None, 1, 1, 44) 46508 ['block5b_se_reshape[0][0]']
block5b_se_expand (Conv2D) (None, 1, 1, 1056) 47520 ['block5b_se_reduce[0][0]']
block5b_se_excite (Multiply) (None, 29, 29, 1056 0 ['block5b_activation[0][0]',
) 'block5b_se_expand[0][0]']
block5b_project_conv (Conv2D) (None, 29, 29, 176) 185856 ['block5b_se_excite[0][0]']
block5b_project_bn (BatchNorma (None, 29, 29, 176) 704 ['block5b_project_conv[0][0]']
lization)
block5b_drop (FixedDropout) (None, 29, 29, 176) 0 ['block5b_project_bn[0][0]']
block5b_add (Add) (None, 29, 29, 176) 0 ['block5b_drop[0][0]',
'block5a_project_bn[0][0]']
block5c_expand_conv (Conv2D) (None, 29, 29, 1056 185856 ['block5b_add[0][0]']
)
block5c_expand_bn (BatchNormal (None, 29, 29, 1056 4224 ['block5c_expand_conv[0][0]']
ization) )
block5c_expand_activation (Act (None, 29, 29, 1056 0 ['block5c_expand_bn[0][0]']
ivation) )
block5c_dwconv (DepthwiseConv2 (None, 29, 29, 1056 26400 ['block5c_expand_activation[0][0]
D) ) ']
block5c_bn (BatchNormalization (None, 29, 29, 1056 4224 ['block5c_dwconv[0][0]']
) )
block5c_activation (Activation (None, 29, 29, 1056 0 ['block5c_bn[0][0]']
) )
block5c_se_squeeze (GlobalAver (None, 1056) 0 ['block5c_activation[0][0]']
agePooling2D)
block5c_se_reshape (Reshape) (None, 1, 1, 1056) 0 ['block5c_se_squeeze[0][0]']
block5c_se_reduce (Conv2D) (None, 1, 1, 44) 46508 ['block5c_se_reshape[0][0]']
block5c_se_expand (Conv2D) (None, 1, 1, 1056) 47520 ['block5c_se_reduce[0][0]']
block5c_se_excite (Multiply) (None, 29, 29, 1056 0 ['block5c_activation[0][0]',
) 'block5c_se_expand[0][0]']
block5c_project_conv (Conv2D) (None, 29, 29, 176) 185856 ['block5c_se_excite[0][0]']
block5c_project_bn (BatchNorma (None, 29, 29, 176) 704 ['block5c_project_conv[0][0]']
lization)
block5c_drop (FixedDropout) (None, 29, 29, 176) 0 ['block5c_project_bn[0][0]']
block5c_add (Add) (None, 29, 29, 176) 0 ['block5c_drop[0][0]',
'block5b_add[0][0]']
block5d_expand_conv (Conv2D) (None, 29, 29, 1056 185856 ['block5c_add[0][0]']
)
block5d_expand_bn (BatchNormal (None, 29, 29, 1056 4224 ['block5d_expand_conv[0][0]']
ization) )
block5d_expand_activation (Act (None, 29, 29, 1056 0 ['block5d_expand_bn[0][0]']
ivation) )
block5d_dwconv (DepthwiseConv2 (None, 29, 29, 1056 26400 ['block5d_expand_activation[0][0]
D) ) ']
block5d_bn (BatchNormalization (None, 29, 29, 1056 4224 ['block5d_dwconv[0][0]']
) )
block5d_activation (Activation (None, 29, 29, 1056 0 ['block5d_bn[0][0]']
) )
block5d_se_squeeze (GlobalAver (None, 1056) 0 ['block5d_activation[0][0]']
agePooling2D)
block5d_se_reshape (Reshape) (None, 1, 1, 1056) 0 ['block5d_se_squeeze[0][0]']
block5d_se_reduce (Conv2D) (None, 1, 1, 44) 46508 ['block5d_se_reshape[0][0]']
block5d_se_expand (Conv2D) (None, 1, 1, 1056) 47520 ['block5d_se_reduce[0][0]']
block5d_se_excite (Multiply) (None, 29, 29, 1056 0 ['block5d_activation[0][0]',
) 'block5d_se_expand[0][0]']
block5d_project_conv (Conv2D) (None, 29, 29, 176) 185856 ['block5d_se_excite[0][0]']
block5d_project_bn (BatchNorma (None, 29, 29, 176) 704 ['block5d_project_conv[0][0]']
lization)
block5d_drop (FixedDropout) (None, 29, 29, 176) 0 ['block5d_project_bn[0][0]']
block5d_add (Add) (None, 29, 29, 176) 0 ['block5d_drop[0][0]',
'block5c_add[0][0]']
block5e_expand_conv (Conv2D) (None, 29, 29, 1056 185856 ['block5d_add[0][0]']
)
block5e_expand_bn (BatchNormal (None, 29, 29, 1056 4224 ['block5e_expand_conv[0][0]']
ization) )
block5e_expand_activation (Act (None, 29, 29, 1056 0 ['block5e_expand_bn[0][0]']
ivation) )
block5e_dwconv (DepthwiseConv2 (None, 29, 29, 1056 26400 ['block5e_expand_activation[0][0]
D) ) ']
block5e_bn (BatchNormalization (None, 29, 29, 1056 4224 ['block5e_dwconv[0][0]']
) )
block5e_activation (Activation (None, 29, 29, 1056 0 ['block5e_bn[0][0]']
) )
block5e_se_squeeze (GlobalAver (None, 1056) 0 ['block5e_activation[0][0]']
agePooling2D)
block5e_se_reshape (Reshape) (None, 1, 1, 1056) 0 ['block5e_se_squeeze[0][0]']
block5e_se_reduce (Conv2D) (None, 1, 1, 44) 46508 ['block5e_se_reshape[0][0]']
block5e_se_expand (Conv2D) (None, 1, 1, 1056) 47520 ['block5e_se_reduce[0][0]']
block5e_se_excite (Multiply) (None, 29, 29, 1056 0 ['block5e_activation[0][0]',
) 'block5e_se_expand[0][0]']
block5e_project_conv (Conv2D) (None, 29, 29, 176) 185856 ['block5e_se_excite[0][0]']
block5e_project_bn (BatchNorma (None, 29, 29, 176) 704 ['block5e_project_conv[0][0]']
lization)
block5e_drop (FixedDropout) (None, 29, 29, 176) 0 ['block5e_project_bn[0][0]']
block5e_add (Add) (None, 29, 29, 176) 0 ['block5e_drop[0][0]',
'block5d_add[0][0]']
block5f_expand_conv (Conv2D) (None, 29, 29, 1056 185856 ['block5e_add[0][0]']
)
block5f_expand_bn (BatchNormal (None, 29, 29, 1056 4224 ['block5f_expand_conv[0][0]']
ization) )
block5f_expand_activation (Act (None, 29, 29, 1056 0 ['block5f_expand_bn[0][0]']
ivation) )
block5f_dwconv (DepthwiseConv2 (None, 29, 29, 1056 26400 ['block5f_expand_activation[0][0]
D) ) ']
block5f_bn (BatchNormalization (None, 29, 29, 1056 4224 ['block5f_dwconv[0][0]']
) )
block5f_activation (Activation (None, 29, 29, 1056 0 ['block5f_bn[0][0]']
) )
block5f_se_squeeze (GlobalAver (None, 1056) 0 ['block5f_activation[0][0]']
agePooling2D)
block5f_se_reshape (Reshape) (None, 1, 1, 1056) 0 ['block5f_se_squeeze[0][0]']
block5f_se_reduce (Conv2D) (None, 1, 1, 44) 46508 ['block5f_se_reshape[0][0]']
block5f_se_expand (Conv2D) (None, 1, 1, 1056) 47520 ['block5f_se_reduce[0][0]']
block5f_se_excite (Multiply) (None, 29, 29, 1056 0 ['block5f_activation[0][0]',
) 'block5f_se_expand[0][0]']
block5f_project_conv (Conv2D) (None, 29, 29, 176) 185856 ['block5f_se_excite[0][0]']
block5f_project_bn (BatchNorma (None, 29, 29, 176) 704 ['block5f_project_conv[0][0]']
lization)
block5f_drop (FixedDropout) (None, 29, 29, 176) 0 ['block5f_project_bn[0][0]']
block5f_add (Add) (None, 29, 29, 176) 0 ['block5f_drop[0][0]',
'block5e_add[0][0]']
block5g_expand_conv (Conv2D) (None, 29, 29, 1056 185856 ['block5f_add[0][0]']
)
block5g_expand_bn (BatchNormal (None, 29, 29, 1056 4224 ['block5g_expand_conv[0][0]']
ization) )
block5g_expand_activation (Act (None, 29, 29, 1056 0 ['block5g_expand_bn[0][0]']
ivation) )
block5g_dwconv (DepthwiseConv2 (None, 29, 29, 1056 26400 ['block5g_expand_activation[0][0]
D) ) ']
block5g_bn (BatchNormalization (None, 29, 29, 1056 4224 ['block5g_dwconv[0][0]']
) )
block5g_activation (Activation (None, 29, 29, 1056 0 ['block5g_bn[0][0]']
) )
block5g_se_squeeze (GlobalAver (None, 1056) 0 ['block5g_activation[0][0]']
agePooling2D)
block5g_se_reshape (Reshape) (None, 1, 1, 1056) 0 ['block5g_se_squeeze[0][0]']
block5g_se_reduce (Conv2D) (None, 1, 1, 44) 46508 ['block5g_se_reshape[0][0]']
block5g_se_expand (Conv2D) (None, 1, 1, 1056) 47520 ['block5g_se_reduce[0][0]']
block5g_se_excite (Multiply) (None, 29, 29, 1056 0 ['block5g_activation[0][0]',
) 'block5g_se_expand[0][0]']
block5g_project_conv (Conv2D) (None, 29, 29, 176) 185856 ['block5g_se_excite[0][0]']
block5g_project_bn (BatchNorma (None, 29, 29, 176) 704 ['block5g_project_conv[0][0]']
lization)
block5g_drop (FixedDropout) (None, 29, 29, 176) 0 ['block5g_project_bn[0][0]']
block5g_add (Add) (None, 29, 29, 176) 0 ['block5g_drop[0][0]',
'block5f_add[0][0]']
block6a_expand_conv (Conv2D) (None, 29, 29, 1056 185856 ['block5g_add[0][0]']
)
block6a_expand_bn (BatchNormal (None, 29, 29, 1056 4224 ['block6a_expand_conv[0][0]']
ization) )
block6a_expand_activation (Act (None, 29, 29, 1056 0 ['block6a_expand_bn[0][0]']
ivation) )
block6a_dwconv (DepthwiseConv2 (None, 15, 15, 1056 26400 ['block6a_expand_activation[0][0]
D) ) ']
block6a_bn (BatchNormalization (None, 15, 15, 1056 4224 ['block6a_dwconv[0][0]']
) )
block6a_activation (Activation (None, 15, 15, 1056 0 ['block6a_bn[0][0]']
) )
block6a_se_squeeze (GlobalAver (None, 1056) 0 ['block6a_activation[0][0]']
agePooling2D)
block6a_se_reshape (Reshape) (None, 1, 1, 1056) 0 ['block6a_se_squeeze[0][0]']
block6a_se_reduce (Conv2D) (None, 1, 1, 44) 46508 ['block6a_se_reshape[0][0]']
block6a_se_expand (Conv2D) (None, 1, 1, 1056) 47520 ['block6a_se_reduce[0][0]']
block6a_se_excite (Multiply) (None, 15, 15, 1056 0 ['block6a_activation[0][0]',
) 'block6a_se_expand[0][0]']
block6a_project_conv (Conv2D) (None, 15, 15, 304) 321024 ['block6a_se_excite[0][0]']
block6a_project_bn (BatchNorma (None, 15, 15, 304) 1216 ['block6a_project_conv[0][0]']
lization)
block6b_expand_conv (Conv2D) (None, 15, 15, 1824 554496 ['block6a_project_bn[0][0]']
)
block6b_expand_bn (BatchNormal (None, 15, 15, 1824 7296 ['block6b_expand_conv[0][0]']
ization) )
block6b_expand_activation (Act (None, 15, 15, 1824 0 ['block6b_expand_bn[0][0]']
ivation) )
block6b_dwconv (DepthwiseConv2 (None, 15, 15, 1824 45600 ['block6b_expand_activation[0][0]
D) ) ']
block6b_bn (BatchNormalization (None, 15, 15, 1824 7296 ['block6b_dwconv[0][0]']
) )
block6b_activation (Activation (None, 15, 15, 1824 0 ['block6b_bn[0][0]']
) )
block6b_se_squeeze (GlobalAver (None, 1824) 0 ['block6b_activation[0][0]']
agePooling2D)
block6b_se_reshape (Reshape) (None, 1, 1, 1824) 0 ['block6b_se_squeeze[0][0]']
block6b_se_reduce (Conv2D) (None, 1, 1, 76) 138700 ['block6b_se_reshape[0][0]']
block6b_se_expand (Conv2D) (None, 1, 1, 1824) 140448 ['block6b_se_reduce[0][0]']
block6b_se_excite (Multiply) (None, 15, 15, 1824 0 ['block6b_activation[0][0]',
) 'block6b_se_expand[0][0]']
block6b_project_conv (Conv2D) (None, 15, 15, 304) 554496 ['block6b_se_excite[0][0]']
block6b_project_bn (BatchNorma (None, 15, 15, 304) 1216 ['block6b_project_conv[0][0]']
lization)
block6b_drop (FixedDropout) (None, 15, 15, 304) 0 ['block6b_project_bn[0][0]']
block6b_add (Add) (None, 15, 15, 304) 0 ['block6b_drop[0][0]',
'block6a_project_bn[0][0]']
block6c_expand_conv (Conv2D) (None, 15, 15, 1824 554496 ['block6b_add[0][0]']
)
block6c_expand_bn (BatchNormal (None, 15, 15, 1824 7296 ['block6c_expand_conv[0][0]']
ization) )
block6c_expand_activation (Act (None, 15, 15, 1824 0 ['block6c_expand_bn[0][0]']
ivation) )
block6c_dwconv (DepthwiseConv2 (None, 15, 15, 1824 45600 ['block6c_expand_activation[0][0]
D) ) ']
block6c_bn (BatchNormalization (None, 15, 15, 1824 7296 ['block6c_dwconv[0][0]']
) )
block6c_activation (Activation (None, 15, 15, 1824 0 ['block6c_bn[0][0]']
) )
block6c_se_squeeze (GlobalAver (None, 1824) 0 ['block6c_activation[0][0]']
agePooling2D)
block6c_se_reshape (Reshape) (None, 1, 1, 1824) 0 ['block6c_se_squeeze[0][0]']
block6c_se_reduce (Conv2D) (None, 1, 1, 76) 138700 ['block6c_se_reshape[0][0]']
block6c_se_expand (Conv2D) (None, 1, 1, 1824) 140448 ['block6c_se_reduce[0][0]']
block6c_se_excite (Multiply) (None, 15, 15, 1824 0 ['block6c_activation[0][0]',
) 'block6c_se_expand[0][0]']
block6c_project_conv (Conv2D) (None, 15, 15, 304) 554496 ['block6c_se_excite[0][0]']
block6c_project_bn (BatchNorma (None, 15, 15, 304) 1216 ['block6c_project_conv[0][0]']
lization)
block6c_drop (FixedDropout) (None, 15, 15, 304) 0 ['block6c_project_bn[0][0]']
block6c_add (Add) (None, 15, 15, 304) 0 ['block6c_drop[0][0]',
'block6b_add[0][0]']
block6d_expand_conv (Conv2D) (None, 15, 15, 1824 554496 ['block6c_add[0][0]']
)
block6d_expand_bn (BatchNormal (None, 15, 15, 1824 7296 ['block6d_expand_conv[0][0]']
ization) )
block6d_expand_activation (Act (None, 15, 15, 1824 0 ['block6d_expand_bn[0][0]']
ivation) )
block6d_dwconv (DepthwiseConv2 (None, 15, 15, 1824 45600 ['block6d_expand_activation[0][0]
D) ) ']
block6d_bn (BatchNormalization (None, 15, 15, 1824 7296 ['block6d_dwconv[0][0]']
) )
block6d_activation (Activation (None, 15, 15, 1824 0 ['block6d_bn[0][0]']
) )
block6d_se_squeeze (GlobalAver (None, 1824) 0 ['block6d_activation[0][0]']
agePooling2D)
block6d_se_reshape (Reshape) (None, 1, 1, 1824) 0 ['block6d_se_squeeze[0][0]']
block6d_se_reduce (Conv2D) (None, 1, 1, 76) 138700 ['block6d_se_reshape[0][0]']
block6d_se_expand (Conv2D) (None, 1, 1, 1824) 140448 ['block6d_se_reduce[0][0]']
block6d_se_excite (Multiply) (None, 15, 15, 1824 0 ['block6d_activation[0][0]',
) 'block6d_se_expand[0][0]']
block6d_project_conv (Conv2D) (None, 15, 15, 304) 554496 ['block6d_se_excite[0][0]']
block6d_project_bn (BatchNorma (None, 15, 15, 304) 1216 ['block6d_project_conv[0][0]']
lization)
block6d_drop (FixedDropout) (None, 15, 15, 304) 0 ['block6d_project_bn[0][0]']
block6d_add (Add) (None, 15, 15, 304) 0 ['block6d_drop[0][0]',
'block6c_add[0][0]']
block6e_expand_conv (Conv2D) (None, 15, 15, 1824 554496 ['block6d_add[0][0]']
)
block6e_expand_bn (BatchNormal (None, 15, 15, 1824 7296 ['block6e_expand_conv[0][0]']
ization) )
block6e_expand_activation (Act (None, 15, 15, 1824 0 ['block6e_expand_bn[0][0]']
ivation) )
block6e_dwconv (DepthwiseConv2 (None, 15, 15, 1824 45600 ['block6e_expand_activation[0][0]
D) ) ']
block6e_bn (BatchNormalization (None, 15, 15, 1824 7296 ['block6e_dwconv[0][0]']
) )
block6e_activation (Activation (None, 15, 15, 1824 0 ['block6e_bn[0][0]']
) )
block6e_se_squeeze (GlobalAver (None, 1824) 0 ['block6e_activation[0][0]']
agePooling2D)
block6e_se_reshape (Reshape) (None, 1, 1, 1824) 0 ['block6e_se_squeeze[0][0]']
block6e_se_reduce (Conv2D) (None, 1, 1, 76) 138700 ['block6e_se_reshape[0][0]']
block6e_se_expand (Conv2D) (None, 1, 1, 1824) 140448 ['block6e_se_reduce[0][0]']
block6e_se_excite (Multiply) (None, 15, 15, 1824 0 ['block6e_activation[0][0]',
) 'block6e_se_expand[0][0]']
block6e_project_conv (Conv2D) (None, 15, 15, 304) 554496 ['block6e_se_excite[0][0]']
block6e_project_bn (BatchNorma (None, 15, 15, 304) 1216 ['block6e_project_conv[0][0]']
lization)
block6e_drop (FixedDropout) (None, 15, 15, 304) 0 ['block6e_project_bn[0][0]']
block6e_add (Add) (None, 15, 15, 304) 0 ['block6e_drop[0][0]',
'block6d_add[0][0]']
block6f_expand_conv (Conv2D) (None, 15, 15, 1824 554496 ['block6e_add[0][0]']
)
block6f_expand_bn (BatchNormal (None, 15, 15, 1824 7296 ['block6f_expand_conv[0][0]']
ization) )
block6f_expand_activation (Act (None, 15, 15, 1824 0 ['block6f_expand_bn[0][0]']
ivation) )
block6f_dwconv (DepthwiseConv2 (None, 15, 15, 1824 45600 ['block6f_expand_activation[0][0]
D) ) ']
block6f_bn (BatchNormalization (None, 15, 15, 1824 7296 ['block6f_dwconv[0][0]']
) )
block6f_activation (Activation (None, 15, 15, 1824 0 ['block6f_bn[0][0]']
) )
block6f_se_squeeze (GlobalAver (None, 1824) 0 ['block6f_activation[0][0]']
agePooling2D)
block6f_se_reshape (Reshape) (None, 1, 1, 1824) 0 ['block6f_se_squeeze[0][0]']
block6f_se_reduce (Conv2D) (None, 1, 1, 76) 138700 ['block6f_se_reshape[0][0]']
block6f_se_expand (Conv2D) (None, 1, 1, 1824) 140448 ['block6f_se_reduce[0][0]']
block6f_se_excite (Multiply) (None, 15, 15, 1824 0 ['block6f_activation[0][0]',
) 'block6f_se_expand[0][0]']
block6f_project_conv (Conv2D) (None, 15, 15, 304) 554496 ['block6f_se_excite[0][0]']
block6f_project_bn (BatchNorma (None, 15, 15, 304) 1216 ['block6f_project_conv[0][0]']
lization)
block6f_drop (FixedDropout) (None, 15, 15, 304) 0 ['block6f_project_bn[0][0]']
block6f_add (Add) (None, 15, 15, 304) 0 ['block6f_drop[0][0]',
'block6e_add[0][0]']
block6g_expand_conv (Conv2D) (None, 15, 15, 1824 554496 ['block6f_add[0][0]']
)
block6g_expand_bn (BatchNormal (None, 15, 15, 1824 7296 ['block6g_expand_conv[0][0]']
ization) )
block6g_expand_activation (Act (None, 15, 15, 1824 0 ['block6g_expand_bn[0][0]']
ivation) )
block6g_dwconv (DepthwiseConv2 (None, 15, 15, 1824 45600 ['block6g_expand_activation[0][0]
D) ) ']
block6g_bn (BatchNormalization (None, 15, 15, 1824 7296 ['block6g_dwconv[0][0]']
) )
block6g_activation (Activation (None, 15, 15, 1824 0 ['block6g_bn[0][0]']
) )
block6g_se_squeeze (GlobalAver (None, 1824) 0 ['block6g_activation[0][0]']
agePooling2D)
block6g_se_reshape (Reshape) (None, 1, 1, 1824) 0 ['block6g_se_squeeze[0][0]']
block6g_se_reduce (Conv2D) (None, 1, 1, 76) 138700 ['block6g_se_reshape[0][0]']
block6g_se_expand (Conv2D) (None, 1, 1, 1824) 140448 ['block6g_se_reduce[0][0]']
block6g_se_excite (Multiply) (None, 15, 15, 1824 0 ['block6g_activation[0][0]',
) 'block6g_se_expand[0][0]']
block6g_project_conv (Conv2D) (None, 15, 15, 304) 554496 ['block6g_se_excite[0][0]']
block6g_project_bn (BatchNorma (None, 15, 15, 304) 1216 ['block6g_project_conv[0][0]']
lization)
block6g_drop (FixedDropout) (None, 15, 15, 304) 0 ['block6g_project_bn[0][0]']
block6g_add (Add) (None, 15, 15, 304) 0 ['block6g_drop[0][0]',
'block6f_add[0][0]']
block6h_expand_conv (Conv2D) (None, 15, 15, 1824 554496 ['block6g_add[0][0]']
)
block6h_expand_bn (BatchNormal (None, 15, 15, 1824 7296 ['block6h_expand_conv[0][0]']
ization) )
block6h_expand_activation (Act (None, 15, 15, 1824 0 ['block6h_expand_bn[0][0]']
ivation) )
block6h_dwconv (DepthwiseConv2 (None, 15, 15, 1824 45600 ['block6h_expand_activation[0][0]
D) ) ']
block6h_bn (BatchNormalization (None, 15, 15, 1824 7296 ['block6h_dwconv[0][0]']
) )
block6h_activation (Activation (None, 15, 15, 1824 0 ['block6h_bn[0][0]']
) )
block6h_se_squeeze (GlobalAver (None, 1824) 0 ['block6h_activation[0][0]']
agePooling2D)
block6h_se_reshape (Reshape) (None, 1, 1, 1824) 0 ['block6h_se_squeeze[0][0]']
block6h_se_reduce (Conv2D) (None, 1, 1, 76) 138700 ['block6h_se_reshape[0][0]']
block6h_se_expand (Conv2D) (None, 1, 1, 1824) 140448 ['block6h_se_reduce[0][0]']
block6h_se_excite (Multiply) (None, 15, 15, 1824 0 ['block6h_activation[0][0]',
) 'block6h_se_expand[0][0]']
block6h_project_conv (Conv2D) (None, 15, 15, 304) 554496 ['block6h_se_excite[0][0]']
block6h_project_bn (BatchNorma (None, 15, 15, 304) 1216 ['block6h_project_conv[0][0]']
lization)
block6h_drop (FixedDropout) (None, 15, 15, 304) 0 ['block6h_project_bn[0][0]']
block6h_add (Add) (None, 15, 15, 304) 0 ['block6h_drop[0][0]',
'block6g_add[0][0]']
block6i_expand_conv (Conv2D) (None, 15, 15, 1824 554496 ['block6h_add[0][0]']
)
block6i_expand_bn (BatchNormal (None, 15, 15, 1824 7296 ['block6i_expand_conv[0][0]']
ization) )
block6i_expand_activation (Act (None, 15, 15, 1824 0 ['block6i_expand_bn[0][0]']
ivation) )
block6i_dwconv (DepthwiseConv2 (None, 15, 15, 1824 45600 ['block6i_expand_activation[0][0]
D) ) ']
block6i_bn (BatchNormalization (None, 15, 15, 1824 7296 ['block6i_dwconv[0][0]']
) )
block6i_activation (Activation (None, 15, 15, 1824 0 ['block6i_bn[0][0]']
) )
block6i_se_squeeze (GlobalAver (None, 1824) 0 ['block6i_activation[0][0]']
agePooling2D)
block6i_se_reshape (Reshape) (None, 1, 1, 1824) 0 ['block6i_se_squeeze[0][0]']
block6i_se_reduce (Conv2D) (None, 1, 1, 76) 138700 ['block6i_se_reshape[0][0]']
block6i_se_expand (Conv2D) (None, 1, 1, 1824) 140448 ['block6i_se_reduce[0][0]']
block6i_se_excite (Multiply) (None, 15, 15, 1824 0 ['block6i_activation[0][0]',
) 'block6i_se_expand[0][0]']
block6i_project_conv (Conv2D) (None, 15, 15, 304) 554496 ['block6i_se_excite[0][0]']
block6i_project_bn (BatchNorma (None, 15, 15, 304) 1216 ['block6i_project_conv[0][0]']
lization)
block6i_drop (FixedDropout) (None, 15, 15, 304) 0 ['block6i_project_bn[0][0]']
block6i_add (Add) (None, 15, 15, 304) 0 ['block6i_drop[0][0]',
'block6h_add[0][0]']
block7a_expand_conv (Conv2D) (None, 15, 15, 1824 554496 ['block6i_add[0][0]']
)
block7a_expand_bn (BatchNormal (None, 15, 15, 1824 7296 ['block7a_expand_conv[0][0]']
ization) )
block7a_expand_activation (Act (None, 15, 15, 1824 0 ['block7a_expand_bn[0][0]']
ivation) )
block7a_dwconv (DepthwiseConv2 (None, 15, 15, 1824 16416 ['block7a_expand_activation[0][0]
D) ) ']
block7a_bn (BatchNormalization (None, 15, 15, 1824 7296 ['block7a_dwconv[0][0]']
) )
block7a_activation (Activation (None, 15, 15, 1824 0 ['block7a_bn[0][0]']
) )
block7a_se_squeeze (GlobalAver (None, 1824) 0 ['block7a_activation[0][0]']
agePooling2D)
block7a_se_reshape (Reshape) (None, 1, 1, 1824) 0 ['block7a_se_squeeze[0][0]']
block7a_se_reduce (Conv2D) (None, 1, 1, 76) 138700 ['block7a_se_reshape[0][0]']
block7a_se_expand (Conv2D) (None, 1, 1, 1824) 140448 ['block7a_se_reduce[0][0]']
block7a_se_excite (Multiply) (None, 15, 15, 1824 0 ['block7a_activation[0][0]',
) 'block7a_se_expand[0][0]']
block7a_project_conv (Conv2D) (None, 15, 15, 512) 933888 ['block7a_se_excite[0][0]']
block7a_project_bn (BatchNorma (None, 15, 15, 512) 2048 ['block7a_project_conv[0][0]']
lization)
block7b_expand_conv (Conv2D) (None, 15, 15, 3072 1572864 ['block7a_project_bn[0][0]']
)
block7b_expand_bn (BatchNormal (None, 15, 15, 3072 12288 ['block7b_expand_conv[0][0]']
ization) )
block7b_expand_activation (Act (None, 15, 15, 3072 0 ['block7b_expand_bn[0][0]']
ivation) )
block7b_dwconv (DepthwiseConv2 (None, 15, 15, 3072 27648 ['block7b_expand_activation[0][0]
D) ) ']
block7b_bn (BatchNormalization (None, 15, 15, 3072 12288 ['block7b_dwconv[0][0]']
) )
block7b_activation (Activation (None, 15, 15, 3072 0 ['block7b_bn[0][0]']
) )
block7b_se_squeeze (GlobalAver (None, 3072) 0 ['block7b_activation[0][0]']
agePooling2D)
block7b_se_reshape (Reshape) (None, 1, 1, 3072) 0 ['block7b_se_squeeze[0][0]']
block7b_se_reduce (Conv2D) (None, 1, 1, 128) 393344 ['block7b_se_reshape[0][0]']
block7b_se_expand (Conv2D) (None, 1, 1, 3072) 396288 ['block7b_se_reduce[0][0]']
block7b_se_excite (Multiply) (None, 15, 15, 3072 0 ['block7b_activation[0][0]',
) 'block7b_se_expand[0][0]']
block7b_project_conv (Conv2D) (None, 15, 15, 512) 1572864 ['block7b_se_excite[0][0]']
block7b_project_bn (BatchNorma (None, 15, 15, 512) 2048 ['block7b_project_conv[0][0]']
lization)
block7b_drop (FixedDropout) (None, 15, 15, 512) 0 ['block7b_project_bn[0][0]']
block7b_add (Add) (None, 15, 15, 512) 0 ['block7b_drop[0][0]',
'block7a_project_bn[0][0]']
block7c_expand_conv (Conv2D) (None, 15, 15, 3072 1572864 ['block7b_add[0][0]']
)
block7c_expand_bn (BatchNormal (None, 15, 15, 3072 12288 ['block7c_expand_conv[0][0]']
ization) )
block7c_expand_activation (Act (None, 15, 15, 3072 0 ['block7c_expand_bn[0][0]']
ivation) )
block7c_dwconv (DepthwiseConv2 (None, 15, 15, 3072 27648 ['block7c_expand_activation[0][0]
D) ) ']
block7c_bn (BatchNormalization (None, 15, 15, 3072 12288 ['block7c_dwconv[0][0]']
) )
block7c_activation (Activation (None, 15, 15, 3072 0 ['block7c_bn[0][0]']
) )
block7c_se_squeeze (GlobalAver (None, 3072) 0 ['block7c_activation[0][0]']
agePooling2D)
block7c_se_reshape (Reshape) (None, 1, 1, 3072) 0 ['block7c_se_squeeze[0][0]']
block7c_se_reduce (Conv2D) (None, 1, 1, 128) 393344 ['block7c_se_reshape[0][0]']
block7c_se_expand (Conv2D) (None, 1, 1, 3072) 396288 ['block7c_se_reduce[0][0]']
block7c_se_excite (Multiply) (None, 15, 15, 3072 0 ['block7c_activation[0][0]',
) 'block7c_se_expand[0][0]']
block7c_project_conv (Conv2D) (None, 15, 15, 512) 1572864 ['block7c_se_excite[0][0]']
block7c_project_bn (BatchNorma (None, 15, 15, 512) 2048 ['block7c_project_conv[0][0]']
lization)
block7c_drop (FixedDropout) (None, 15, 15, 512) 0 ['block7c_project_bn[0][0]']
block7c_add (Add) (None, 15, 15, 512) 0 ['block7c_drop[0][0]',
'block7b_add[0][0]']
top_conv (Conv2D) (None, 15, 15, 2048 1048576 ['block7c_add[0][0]']
)
top_bn (BatchNormalization) (None, 15, 15, 2048 8192 ['top_conv[0][0]']
)
top_activation (Activation) (None, 15, 15, 2048 0 ['top_bn[0][0]']
)
flatten_5 (Flatten) (None, 460800) 0 ['top_activation[0][0]']
dense_10 (Dense) (None, 1024) 471860224 ['flatten_5[0][0]']
dropout_5 (Dropout) (None, 1024) 0 ['dense_10[0][0]']
dense_11 (Dense) (None, 1) 1025 ['dropout_5[0][0]']
==================================================================================================
Total params: 500,374,769
Trainable params: 471,861,249
Non-trainable params: 28,513,520
__________________________________________________________________________________________________
#get total parameters
model_params = model_final.count_params()
# Specify the optimizer, loss function and evaluation metrics.
model_final.compile(loss='binary_crossentropy', optimizer=tf.keras.optimizers.RMSprop(learning_rate=0.0001), metrics=['accuracy'])
t1 = time.time()
#train the model
eff_history = model_final.fit_generator(train_generator, validation_data = validation_generator, steps_per_epoch = 100, epochs = 10)
fit_time = time.time() - t1
<ipython-input-123-b7f31b017b18>:3: UserWarning: `Model.fit_generator` is deprecated and will be removed in a future version. Please use `Model.fit`, which supports generators.
  eff_history = model_final.fit_generator(train_generator, validation_data = validation_generator, steps_per_epoch = 100, epochs = 10)
Epoch 1/10
100/100 [==============================] - 138s 1s/step - loss: 1.0984 - accuracy: 0.9335 - val_loss: 0.1210 - val_accuracy: 0.9950
Epoch 2/10
100/100 [==============================] - 117s 1s/step - loss: 0.7081 - accuracy: 0.9645 - val_loss: 0.1103 - val_accuracy: 0.9937
Epoch 3/10
100/100 [==============================] - 117s 1s/step - loss: 0.8698 - accuracy: 0.9580 - val_loss: 0.1068 - val_accuracy: 0.9962
Epoch 4/10
100/100 [==============================] - 116s 1s/step - loss: 0.4198 - accuracy: 0.9775 - val_loss: 0.0582 - val_accuracy: 0.9950
Epoch 5/10
100/100 [==============================] - 116s 1s/step - loss: 1.0802 - accuracy: 0.9625 - val_loss: 0.0098 - val_accuracy: 0.9987
Epoch 6/10
100/100 [==============================] - 115s 1s/step - loss: 0.6919 - accuracy: 0.9743 - val_loss: 0.1645 - val_accuracy: 0.9937
Epoch 7/10
100/100 [==============================] - 117s 1s/step - loss: 0.6089 - accuracy: 0.9750 - val_loss: 0.0482 - val_accuracy: 0.9975
Epoch 8/10
100/100 [==============================] - 116s 1s/step - loss: 0.6994 - accuracy: 0.9798 - val_loss: 0.1010 - val_accuracy: 0.9962
Epoch 9/10
100/100 [==============================] - 116s 1s/step - loss: 0.8074 - accuracy: 0.9713 - val_loss: 0.0526 - val_accuracy: 0.9975
Epoch 10/10
100/100 [==============================] - 116s 1s/step - loss: 0.8600 - accuracy: 0.9680 - val_loss: 0.0529 - val_accuracy: 0.9950
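As the deprecation warning suggests, the same training run could be expressed with the non-deprecated Model.fit call, which accepts generators directly. A minimal equivalent, assuming the same train_generator and validation_generator defined earlier:
# Non-deprecated equivalent of the fit_generator call above
eff_history = model_final.fit(
    train_generator,
    validation_data=validation_generator,
    steps_per_epoch=100,
    epochs=10)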
# time it took to fit the model
print(fit_time)
1203.5126857757568
#Plot training and validation accuracy and loss for each epoch
acc = eff_history.history['accuracy']
val_acc = eff_history.history['val_accuracy']
loss = eff_history.history['loss']
val_loss = eff_history.history['val_loss']
epochs = range(1,len(acc) + 1)
plt.plot(epochs,acc,label = 'Training Accuracy')
plt.plot(epochs,val_acc,label = 'Validation Accuracy')
plt.title('Training and Validation Accuracy')
plt.legend()
plt.figure()
plt.plot(epochs,loss,label = 'Training loss')
plt.plot(epochs,val_loss,label = 'Validation Loss')
plt.title('Training and Validation Loss')
plt.legend()
plt.show()
# Test dataset
test_datagen = ImageDataGenerator(rescale=1./255)
test_generator = test_datagen.flow_from_directory(
test_dir,
target_size=(456, 456),
shuffle = False,
class_mode='binary',
batch_size=1)
Found 2023 images belonging to 2 classes.
#Get test length
filenames = test_generator.filenames
nb_samples = len(filenames)
#Predict on test set
predict = model_final.predict_generator(test_generator,steps = nb_samples)
<ipython-input-128-7710eff794cf>:2: UserWarning: `Model.predict_generator` is deprecated and will be removed in a future version. Please use `Model.predict`, which supports generators.
  predict = model_final.predict_generator(test_generator,steps = nb_samples)
#Get list of prediction results
pred_list = []
for i in predict:
if i > 0.5:
result = 1 #dog
pred_list.append(result)
else:
result = 0 #cat
pred_list.append(result)
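A more compact, vectorized equivalent of the thresholding loop above (assuming predict is the NumPy array of sigmoid outputs returned by the model) would be:
# Threshold the sigmoid outputs at 0.5: 1 = dog, 0 = cat
pred_list = (predict > 0.5).astype(int).ravel().tolist()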
#Create dataframe of image ID, image true label, image predicted label
import pandas as pd
image_ids = [name.split('/')[-1] for name in test_generator.filenames]
image_label = [name.split('/')[0] for name in test_generator.filenames]
data = {'id': image_ids, 'label':image_label, 'prediction':pred_list}
data_df = pd.DataFrame(data)
data_df.label.replace(('cats', 'dogs'), (0, 1), inplace=True) # change cat and dog label to 0 or 1
#Get test accuracy score
from sklearn.metrics import accuracy_score, confusion_matrix
test_accuracy = accuracy_score(data_df['label'], data_df['prediction'])
print('Test Accuracy: ', round((test_accuracy * 100), 2), "%")
Test Accuracy: 99.46 %
from sklearn.metrics import classification_report
#Classification Report
print(classification_report(data_df['label'], data_df['prediction']))
              precision    recall  f1-score   support

           0       1.00      0.99      0.99      1011
           1       0.99      1.00      0.99      1012

    accuracy                           0.99      2023
   macro avg       0.99      0.99      0.99      2023
weighted avg       0.99      0.99      0.99      2023
#Create confusion matrix
import seaborn as sns
label = [0, 1] #0 = cat and 1 = dog
cm = confusion_matrix(data_df['label'], data_df['prediction'], labels = label)
#Plot
ax= plt.subplot()
sns.heatmap(cm, annot=True, fmt='g', ax=ax);
# labels, title and ticks
ax.set_xlabel('Predicted labels');ax.set_ylabel('True labels');
ax.set_title('Confusion Matrix');
ax.xaxis.set_ticklabels(["Cat", "Dog"]); ax.yaxis.set_ticklabels(["Cat", "Dog"])
[Text(0, 0.5, 'Cat'), Text(0, 1.5, 'Dog')]
ExperimentLog.loc[len(ExperimentLog)] = [
"EfficientNet B5",
456,
"RMSprop",
10,
max(acc),
max(val_acc),
test_accuracy,
fit_time,
model_params
]
ExperimentLog
| Base Model | Input Resolution | Optimizer | Epochs | Training Accuracy | Validation Accuracy | Test Accuracy | Fit Time | Total Parameters | |
|---|---|---|---|---|---|---|---|---|---|
| 0 | EfficientNet B0 | 224 | RMSprop | 10 | 0.951500 | 0.98750 | 0.971330 | 2195.619411 | 68276893 |
| 1 | EfficientNet B0 with decay | 224 | RMSprop | 10 | 0.956171 | 0.98750 | 0.974790 | 381.655278 | 68276893 |
| 2 | EfficientNet B1 | 240 | RMSprop | 10 | 0.959698 | 0.99125 | 0.985171 | 349.486195 | 90463361 |
| 3 | EfficientNet B2 | 260 | RMSprop | 10 | 0.965239 | 0.99500 | 0.983193 | 380.205944 | 124555763 |
| 4 | EfficientNet B3 | 300 | RMSprop | 10 | 0.970277 | 0.99750 | 0.988631 | 516.328792 | 168071977 |
| 5 | EfficientNet B4 | 380 | RMSprop | 10 | 0.973300 | 0.99875 | 0.987148 | 787.670477 | 281917017 |
| 6 | EfficientNet B5 | 456 | RMSprop | 10 | 0.979849 | 0.99875 | 0.994563 | 1203.512686 | 500374769 |
Reminder (CNN Adam Optimizer epoch 25 results):
Important, CNN vs MLP: our comparison uses the Adam optimizer and the highest scores from Phase 4 and Phase 3 respectively.
In general, our Keras CNN classification model performs well. Throughout the CNN section we could see how the CNN behaved when switching optimizers and how much it improved relative to Phase 3. The Adam optimizer proved to be the go-to optimizer for this project: it returns 76.2% train accuracy and 76% validation accuracy, while RMSprop returns 71% train accuracy and 71.3% validation accuracy. Adam also returns a solid test accuracy of 76% versus RMSprop's 70%.
Finally, comparing Phase 4 to Phase 3, the improvement is outstanding. Looking at the training scores, test scores, classification report, and confusion matrix, this new CNN model is better overall than the Phase 3 MLP model (please see section 4.5.5, Classification Report and Confusion Matrix). Both models had an easier time classifying dog images than cat images, but the results were very different: the CNN model actually classifies and predicts the cat class, something the MLP model struggled with from the beginning (it correctly classified only 7% of cat images). In terms of scores, CNN with Adam reaches a 76% test accuracy, a highest train accuracy of 76.2%, and a highest validation accuracy of 69%, while the Phase 3 MLP returned a 55.28% test accuracy, a 55.14% train accuracy, and almost 57% validation accuracy. After the detailed explanation throughout the CNN section, we can confidently say that our new model learned much more than the MLP model, which is why we ended up with markedly better results.
CNN Adam (Highest)
PyTorch MLP 20 epochs (Highest)
Several experiments outside of this notebook were done to establish the optimal dropout rate and number of hidden layers, with a focus on running a larger number of epochs. The Adam optimizer was expected to perform best, but stochastic gradient descent (SGD) was also used for comparison. The two models with Adam optimizers performed better, based on higher training and testing accuracies and lower overall loss.
Their testing accuracies and losses were exactly the same, 0.592 and 0.676 respectively. The model with a batch size of 25 had a slightly higher training accuracy of 0.622 (compared to its validation and testing accuracies), which suggests slight overfitting, but its validation and testing accuracies matched those of the other Adam model with the larger batch size. Taking into account that precision from the confusion matrix was slightly higher at recognizing both cats and dogs, the last model, with the Adam optimizer and a batch size of 50, performed the best of the three.
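For illustration, a minimal PyTorch MLP of the kind compared above might look like the sketch below; the input resolution, hidden width, dropout rate, and learning rates are assumptions rather than the exact values used in these experiments:
import torch.nn as nn
import torch.optim as optim

# Simple one-hidden-layer MLP with dropout (layer sizes are illustrative)
mlp = nn.Sequential(
    nn.Flatten(),
    nn.Linear(128 * 128 * 3, 512),
    nn.ReLU(),
    nn.Dropout(0.5),
    nn.Linear(512, 2),  # cat vs. dog logits
)

# The two optimizer variants compared above (learning rates are assumptions)
adam_optimizer = optim.Adam(mlp.parameters(), lr=1e-3)
sgd_optimizer = optim.SGD(mlp.parameters(), lr=0.01, momentum=0.9)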
After implementing the AlexNet model from Phase 3, we noted that the input shape of our original reshaped data, (128, 128, 3), did not match the architecture's expected kernel output. Upon this discovery, I reshaped the data to (227, 227, 3) to match the model and improve accuracy scores. After the adjustment, accuracy scores already showed a huge improvement (as seen below): from a train accuracy of 53.8% with loss 0.6897 and a validation accuracy of 53.8% with loss 0.6857 after 20 epochs in Phase 3, to a train accuracy of 81.57% with loss 0.3866 and a validation accuracy of 84.73% with loss 0.3413 after 30 epochs in Phase 4. Even though more epochs were run in Phase 4, there is still a notable improvement over the prior model at the 20-epoch mark. We also believe that with more time to run a larger number of epochs, such as 100, there would be further room for improvement in accuracy and loss. A minimal sketch of the reshaping step appears below.
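A minimal sketch of resizing inputs to the 227x227x3 shape expected by this AlexNet architecture, assuming torchvision transforms are used for preprocessing (the transform name is illustrative):
import torchvision.transforms as transforms

# Resize images to AlexNet's expected 227x227 input resolution
alexnet_transform = transforms.Compose([
    transforms.Resize((227, 227)),
    transforms.ToTensor(),  # HWC image -> CHW float tensor in [0, 1]
])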
Best Model
Stochastic Gradient Descent (SGD)
Learning Rate = 0.1
import pandas as pd
import io
from IPython.display import Image, display
colab_path = '/content/drive/MyDrive/aml'
display(Image(colab_path + '/Pictures/Accuracy_SGD.png'))
display(Image(colab_path + '/Pictures/Loss_SGD.png'))
df2 = pd.read_csv(colab_path + '/Pictures/resultsAlexSGD1.csv')
print(df2)
    Unnamed: 0  train_loss  train_accuracy  val_loss  val_accuracy
0            0    0.692609        0.516440  0.691752      0.500247
1            1    0.690538        0.534148  0.695055      0.500247
2            2    0.689757        0.541196  0.691929      0.499753
3            3    0.690216        0.534341  0.684570      0.607514
4            4    0.680934        0.573188  0.661732      0.624320
5            5    0.673942        0.575854  0.656755      0.621354
6            6    0.661686        0.603402  0.683813      0.529906
7            7    0.651721        0.623969  0.690574      0.540287
8            8    0.644208        0.632982  0.663663      0.613445
9            9    0.641724        0.640599  0.641612      0.638655
10          10    0.626237        0.650248  0.645294      0.637173
11          11    0.643627        0.639584  0.609441      0.689570
12          12    0.616699        0.664339  0.567750      0.717746
13          13    0.585675        0.688714  0.556213      0.722195
14          14    0.571151        0.700647  0.680022      0.650025
15          15    0.565761        0.707249  0.513231      0.752348
16          16    0.547441        0.721341  0.511588      0.743945
17          17    0.548869        0.721975  0.491906      0.765694
18          18    0.527134        0.741145  0.481288      0.764706
19          19    0.494598        0.757268  0.489095      0.773109
20          20    0.496363        0.751809  0.471099      0.776075
21          21    0.452755        0.787610  0.445296      0.794365
22          22    0.457716        0.780754  0.397429      0.823529
23          23    0.437947        0.795227  0.405906      0.819575
24          24    0.413997        0.806830  0.502303      0.769155
25          25    0.441111        0.789768  0.401271      0.813149
26          26    0.391566        0.826457  0.376019      0.833416
27          27    0.390599        0.822833  0.401286      0.817598
28          28    0.372997        0.832170  0.351549      0.840830
29          29    0.372718        0.830011  0.369844      0.833910
Further Experiments
Stochastic Gradient Descent (SGD)
Learning Rate = 0.01
display(Image(colab_path + '/Pictures/Accuracy_SGD2.png'))
display(Image(colab_path + '/Pictures/Loss_SGD2.png'))
df = pd.read_csv(colab_path + "/ResultsAlexSGD2.csv")
print(df)
   Unnamed: 0  train_loss  train_accuracy  val_loss  val_accuracy
0           0    0.691845        0.527358  0.690975      0.545230
1           1    0.691435        0.542973  0.690424      0.572912
2           2    0.691244        0.550590  0.690041      0.504202
3           3    0.690380        0.555795  0.689150      0.509639
4           4    0.689933        0.556303  0.687872      0.589224
5           5    0.688758        0.575346  0.686866      0.521008
6           6    0.687848        0.575854  0.684655      0.559071
7           7    0.686411        0.567475  0.681666      0.608502
8           8    0.685357        0.568110  0.679547      0.581315
9           9    0.682996        0.574076  0.674300      0.625803
Adam Optimizer
Learning Rate = 0.001
Testing the Adam optimizer did not achieve great results. I first implemented a learning rate of 0.01, but the loss values were so large it was not worth displaying the results. Setting the learning rate to 0.001 gave a slight improvement in loss, but accuracy still barely got above 50% after 15 epochs.
display(Image(colab_path + '/Pictures/Accuracy_Adam.png'))
display(Image(colab_path + '/Pictures/Loss_Adam.png'))
df2 = pd.read_csv(colab_path + '/Pictures/resultsAdam.csv')
print(df2)
    Unnamed: 0  train_loss  train_accuracy  val_loss  val_accuracy
0            0    0.807833        0.508188  0.693017      0.499259
1            1    0.693115        0.505522  0.693454      0.499753
2            2    0.693362        0.494097  0.693151      0.499753
3            3    0.693269        0.493970  0.693148      0.500247
4            4    0.693300        0.488130  0.693154      0.500247
5            5    0.693350        0.501714  0.693157      0.500247
6            6    0.693214        0.503110  0.693224      0.499753
7            7    0.693450        0.491050  0.693152      0.499753
8            8    0.693194        0.502095  0.693180      0.499753
9            9    0.693187        0.504380  0.693147      0.499753
10          10    0.693168        0.499302  0.693148      0.499753
11          11    0.693201        0.495493  0.693148      0.500247
12          12    0.693250        0.495493  0.693148      0.499753
13          13    0.693278        0.495112  0.693147      0.500247
14          14    0.693143        0.501841  0.693147      0.500247
EfficientNet classification models were utilized to predict whether images contained cats or dogs. We accomplished this by performing transfer learning with EfficientNet models pre-trained on ImageNet: we froze the EfficientNet backbone and added top layers to adapt the pre-trained model to our cat and dog dataset.
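To make this setup concrete, here is a minimal sketch of the freeze-and-add-top-layers pattern, assuming the tf.keras.applications version of EfficientNet (the model summary above suggests a standalone EfficientNet Keras implementation was actually used, and the top-layer sizes shown here are illustrative rather than the exact ones in this notebook):
import tensorflow as tf

# Load an EfficientNet backbone pre-trained on ImageNet, without its classification head
base_model = tf.keras.applications.EfficientNetB0(
    include_top=False, weights='imagenet', input_shape=(224, 224, 3))
base_model.trainable = False  # freeze the pre-trained weights

# Add our own top layers for the binary cat/dog task
model = tf.keras.Sequential([
    base_model,
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(1024, activation='relu'),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(1, activation='sigmoid'),
])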
The following models were created:
All EfficientNet models performed well, with test accuracies above 97%. EfficientNetB0 is the base model and had the lowest test accuracy, at 97.1%. We added regularization in the form of a weight decay of 1e-6 on the RMSprop optimizer, which slightly improved the EfficientNetB0 test accuracy from 97.1% to 97.4%, so we implemented the weight decay on all EfficientNet models going forward.
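As a hedged sketch of this regularization step, assuming the `decay` argument of the legacy Keras RMSprop optimizer is what is referred to above as weight decay (on newer TensorFlow versions the same argument lives under tf.keras.optimizers.legacy.RMSprop); the model variable is hypothetical:
# RMSprop with the small decay term described above; other settings as before
optimizer = tf.keras.optimizers.RMSprop(learning_rate=0.0001, decay=1e-6)
model.compile(loss='binary_crossentropy', optimizer=optimizer, metrics=['accuracy'])  # model: hypothetical frozen-backbone model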
EfficientNet models implement compound scaling, meaning each model from EfficientNetB0 to EfficientNetB7 increases in complexity with a fixed per-step increase of roughly 20% in depth, 10% in width, and 15% in image resolution. The benefit of this compound scaling is evident in the test accuracy increasing with each EfficientNet model. EfficientNetB5 was the most complex model we were able to run, and it reached the highest test accuracy of 99.4%. A graph of the training and validation accuracy for each epoch is presented after each model. Each plot shows a good fit, with training and validation accuracy increasing slightly with each epoch, indicating that the model did not have an issue with overfitting.
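For reference, the compound scaling rule from the EfficientNet paper (arXiv:1905.11946, listed in the references below) fixes base coefficients for depth, width, and resolution and raises them to a compound exponent $\phi$; the roughly 20%/10%/15% per-step increases quoted above correspond to:

$$d = \alpha^{\phi}, \quad w = \beta^{\phi}, \quad r = \gamma^{\phi}, \qquad \alpha = 1.2,\ \beta = 1.1,\ \gamma = 1.15, \qquad \alpha \cdot \beta^{2} \cdot \gamma^{2} \approx 2$$

where $d$, $w$, and $r$ scale the network depth, width, and input resolution respectively.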
Our best classification model in previous phases reached a test accuracy of 55.5%; that model was a simple neural network with only one hidden layer. In this phase we implemented a more complex CNN model, which reached a test accuracy of 76%. Transfer learning with EfficientNet outperformed both of these classification models: the EfficientNet architecture is a more complex neural network, and the pre-trained model had already been trained on a massive dataset (ImageNet) with far more resources than we had available. Both of these advantages led to better accuracy.
Our team decided to move forward with EfficientNet instead of EfficientDet because of the ease of implementation with the Keras package, the ability to select a pre-trained model on the ImageNet dataset, and the capacity to use up to EfficientNetB5 without memory limitations. We were not able to implement the most complex models, EfficientNetB6 and EfficientNetB7, due to memory limitations.
Overall, the results were a substantial improvement over our previous phases. In Phase 2, the best result came from the baseline classification model with a test accuracy of 59.4%, and in Phase 3 the best result came from the multilayer perceptron (MLP) classification model with a test accuracy of 57%. We treated Phase 3 as a learning opportunity for more complex models and applied those skills to our final submission. Considering these results, the improvement from the previous best test accuracy corresponds to a percent change of 74.4%.
Goal:
The goal of this project was to create optimal cat and dog image detection machine learning models. To accomplish this, we created classification models to predict cat or dog labels and regression pipelines to predict bounding boxes. Our success was measured through class scoreboard submissions and formal reports, which allowed us to compare our model performance with the rest of the class and receive feedback on how to improve our model implementations.
Current Status:
The top leaderboard scores for the cat and dog image detection project are posted below. We achieved the highest accuracy score among the groups working on this project. Our top-performing model was EfficientNetB5, with a test accuracy of 99.4%.
We accomplished this by performing transfer learning with EfficientNetB5 pre-trained on ImageNet. We froze the EfficientNetB5 backbone and added top layers to adapt the pre-trained model to our cat and dog dataset. The image input resolution was 456, the neural network had 500,374,769 total parameters, and the total run time was 20 minutes.
The second-highest submission on the scoreboard was from Group 24. This group reported a test accuracy of 97% by performing transfer learning with the EfficientDet-D3 model. The image size was 896x896, the neural network had 12 million total parameters, and the total run time was 9 hours.
The EfficientDet model is similar to the EfficientNet model we used. EfficientDet also applies compound scaling to uniformly scale resolution, depth, and width. In EfficientDet models, EfficientNet serves as the backbone network, and a weighted Bi-directional Feature Pyramid Network (BiFPN) is then used for feature fusion.
Identify Gaps:
After researching, our team decided to move forward with the EfficientNet model instead of the EfficientDet model. A drawback of the EfficientNet model is that it only performs image classification, whereas the EfficientDet model is capable of performing both classification and regression. In addition, the EfficientDet model, released by Google in 2020, is the newest implementation of this type of network.
Our team implemented the EfficientNet model for two reasons. First, the EfficientNet model was released in 2019, so there are more examples of how to implement it on various datasets. Second, EfficientNetB0 through B7 can be easily implemented using Keras packages, which made freezing the model and building top layers simple.
Our team was also concerned about the limited resources we had for running models. We used Google Colab Pro to run our models on GPU, but even with this resource we had run into long run times and memory limitations in the past. We knew we needed to select a model that was efficient and could achieve optimal performance with limited resources.
Develop Strategies for Closing Gaps:
Our team's strategy for improving the current top model would be to implement the state-of-the-art EfficientDet model. With more time and resources, we could freeze the EfficientDet backbone and build on the top layers to customize the model to our dataset. To account for resources and efficiency, we would limit the model to EfficientDet-D3. This strategy merges the techniques applied by the two top-performing cat and dog models.
In conclusion, we saw a clear improvement throughout Phase 4 compared to our previous models. The best results came from the EfficientNetB5 implementation with a 456 input resolution and the RMSprop optimizer. With AlexNet there was a vast improvement from Phase 3 to Phase 4, but it would require more training time and memory to achieve the results given by EfficientNetB5. The FCN model was an improvement over the previous phase's implementation, but since it is intended more for segmentation, its results were surpassed by AlexNet and EfficientNetB5. The CNN also showed an improvement over previous implementations but was likewise exceeded by AlexNet and EfficientNetB5.
Overall, it came as no surprise that EfficientNetB5, with the RMSprop optimizer and the largest parameter count, outperformed the CNN, FCN, and AlexNet models. To achieve the best accuracy scores, this model was optimal for the cats-and-dogs detection task, as it is currently one of the most powerful deep-learning classification architectures in terms of processing time and results.
*Note: Any other references may appear within cell blocks alongside the code.
Latex equations in Markdown: https://codesolid.com/using-latex-in-python/
Computations:
https://www.w3schools.com/python/gloss_python_check_if_set_item_exists.asp
https://www.tensorflow.org/api_docs/python/tf/keras/preprocessing/image/ImageDataGenerator
https://studymachinelearning.com/keras-imagedatagenerator-with-flow/
https://keras.io/api/layers/convolution_layers/convolution2d/
https://www.tensorflow.org/api_docs/python/tf/keras/losses/BinaryCrossentropy
https://machinelearningmastery.com/dropout-regularization-deep-learning-models-keras/
https://machinelearningmastery.com/display-deep-learning-model-training-history-in-keras/
https://www.kaggle.com/code/arjunrao2000/beginners-guide-efficientnet-with-keras
https://arxiv.org/pdf/1905.11946.pdf
https://www.youtube.com/watch?v=GOxRSefbBoI&t=1522s
https://www.youtube.com/watch?v=RwQ-5v-kIck&t=624s
https://towardsdatascience.com/cifar-100-transfer-learning-using-efficientnet-ed3ed7b89af2
https://github.com/keras-team/keras/issues/5862#issuecomment-647559571
https://stackoverflow.com/questions/45806669/how-to-use-predict-generator-with-imagedatagenerator
https://iq.opengenus.org/efficientnet/ (diagram)
https://neptune.ai/blog/transfer-learning-guide-examples-for-images-and-text-in-keras
https://www.tensorflow.org/tutorials/images/transfer_learning
https://developer.ibm.com/articles/transfer-learning-for-deep-learning/
https://ai.googleblog.com/2019/05/efficientnet-improving-accuracy-and.html
https://towardsdatascience.com/alexnet-the-architecture-that-challenged-cnns-e406d5297951
Alexnet Input Shape: https://en.wikipedia.org/wiki/AlexNet
Alexnet Architecture: https://medium.com/analytics-vidhya/concept-of-alexnet-convolutional-neural-network-6e73b4f9ee30
https://medium.com/mlearning-ai/alexnet-and-image-classification-8cd8511548b4